Cloud Application Development | Cloud Computing Solutions

The advantages of cloud computing have been widely touted. Running your applications on a cloud platform helps make your great ideas a reality. Cloud app development lets customers take advantage of the growing number of public and private PaaS providers.

Server-Side Rendering (SSR) Vue.js Application to Cloudflare Workers


What are Cloudflare Workers? In this article, we will publish a full-featured, server-side rendered (SSR) Vue.js application to Cloudflare Workers.

In this article, we will publish a full-featured, server-side rendered (SSR) Vue application to Cloudflare Workers. But before we begin, let’s talk about what exactly are Cloudflare Workers, define server-side rendering and compare this setup to a more conventional load-balanced architecture.


How do Cloudflare Workers work?

Part of a growing edge computing trend, Cloudflare Workers allows us to push JavaScript code to edge locations in 194 cities across 90 countries. Rather than orchestrating containers or virtual machines, Cloudflare Workers deals with isolates. Isolates are lightweight sandboxes for our code that, in turn, are managed by the V8 JavaScript and WebAssembly engine used by Chromium and Node.js. This model practically eliminates the cold starts and overhead associated with the virtual machine or container model.
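Conceptually, a Worker is just a handler for fetch events. A minimal sketch (not taken from the project we will deploy below) looks like this:

addEventListener('fetch', event => {
  // Respond to every request at the edge, without contacting an origin server.
  event.respondWith(new Response('Hello from the edge!'))
})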

What is Server-Side Rendering?

Quite a lot happens when a user navigates to a website. The browser resolves the website's domain name, initiates a TCP connection, sends an HTTP request to the web server, and receives the HTTP response.

The HTTP response, if there were no errors, contains the web page's HTML content. It is at this stage that server-side rendering makes a difference. Without SSR, the browser gets a nearly blank web page and sends out requests for JavaScript, CSS, and other assets. While the user waits, the browser receives, parses, and executes the JavaScript code, which, in turn, renders the application.

With server-side rendering, the browser receives and renders the full HTML page from the web server. This avoids the wait needed to download, parse, and execute the JavaScript code in the browser. When the JavaScript application loads, it (re)hydrates, i.e. reuses, the existing DOM tree and takes over navigation and rendering.
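As a rough illustration (not part of the template used later; the import paths are hypothetical), a Vue 2 client entry can opt into hydration by passing a second argument to $mount:

// Hypothetical entry-client.js sketch (Vue 2 API): the `true` argument tells
// Vue to hydrate the existing server-rendered DOM instead of replacing it.
import Vue from 'vue'
import App from './App.vue'

new Vue({
  render: h => h(App)
}).$mount('#app', true)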

Are there alternatives to Cloudflare Workers?

The closest alternative to Cloudflare Workers is AWS Lambda@Edge. AWS Lambda@Edge differs in a few key aspects: it features fewer edge locations, and it runs Node.js processes rather than V8 isolates, which contributes to longer cold start times.

The commonly deployed server-side rendering architecture consists of load balancer(s) distributing incoming traffic to several containerized or virtualized Node.js processes. If we were to match Cloudflare Workers or AWS Lambda@Edge, we'd have to deploy this setup to every relevant region and availability zone. Cloudflare Workers does all of this work for us.

Step-by-step instructions

To publish a server-rendered Vue application to Cloudflare, you’ll need a few things:

  • A Cloudflare account with the Workers Unlimited plan; it costs $0.50 per million requests, with a minimum charge of $5 per month.
  • A Node.js installation on your local machine, and access to the command line.

Make sure that you have the latest versions of the Vue CLI and Cloudflare Wrangler:

$ npm i -g @cloudflare/wrangler @vue/cli

Note your Cloudflare Global API key and Workers Account ID. You can find the Global API key at the bottom of the API Tokens page. The Workers Account ID is on the right sidebar of the Workers overview dashboard; you may need to scroll down.

Next, configure the Workers CLI; it will ask for your account email and the Global API key.

$ wrangler config
Enter Email:
you@example.com
Enter api key:
123456abcdef

Let’s clone the template I prepared. It is a modified Vue CLI project that follows the Server-Side Rendering Guide as closely as possible.

$ git clone https://github.com/l5x/vue-ssr-cloudflare-workers-template.git
$ cd vue-ssr-cloudflare-workers-template
$ npm install

wrangler.toml is the Cloudflare Workers configuration file. Update the account_id field and run:

$ npm run publish
...
 Built successfully, built project size is 554 KiB.
 Successfully published your script to https://vue-ssr-cloudflare-workers.YOUR-SUBDOMAIN.workers.dev

And that’s it. You should have a working server-side rendered Vue application with vue-router, vuex, (re)hydration, dynamic imports, critical CSS, and asset injection. Let’s go over the project’s structure and key configuration files.

client.config.js extends the Vue CLI webpack configuration, adds the VueSSRClientPlugin, and removes the HtmlWebpackPlugin and the PreloadWebpackPlugin. The entry for this build is src/entry-client.js.

worker.config.js extends the Vue CLI webpack configuration; removes the HtmlWebpackPlugin, PreloadWebpackPlugin, and babel-loader; sets the correct environment variables; enables optimizeSSR in the vue-loader options; filters out non-JavaScript assets; and concatenates the output into a single file. The entry for this build is src/entry-worker.js.

vendor/vue-server-renderer/basic.js is a modified Vue Bundle Renderer that works in Cloudflare Workers’ environment. It only supports the renderToString method. See this commit for details.

src/entry-worker.js is the entry point for the Cloudflare Worker. The Worker starts by subscribing to incoming requests, i.e., listening for fetch events. For each incoming request, it checks whether a static asset matches the request URL. If there is a match, Cloudflare Workers Sites responds with the static asset. If the request URL does not match any static asset, the Vue application takes over and matches the request URL against the routes defined in src/router/index.js.
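As a rough sketch of that flow (the actual src/entry-worker.js in the template is more involved, and the renderPage helper and its import path here are hypothetical), a Workers Sites entry typically looks something like this:

import { getAssetFromKV } from '@cloudflare/kv-asset-handler'
import { renderPage } from './ssr'  // hypothetical helper wrapping the bundled Vue renderer

addEventListener('fetch', event => {
  event.respondWith(handleEvent(event))
})

async function handleEvent(event) {
  try {
    // Serve a matching static asset (JS, CSS, images) from Workers Sites.
    return await getAssetFromKV(event)
  } catch (e) {
    // No asset matched: render the requested route with the Vue application.
    const html = await renderPage(new URL(event.request.url).pathname)
    return new Response(html, { headers: { 'Content-Type': 'text/html' } })
  }
}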

Next up

The setup described in this article should give you a good starting point. Next time we will add caching with Cloudflare Workers KV to this setup.

Learn how to deploy a static NuxtJS site to Cloudflare Workers


In this tutorial, you'll learn how to deploy your NuxtJS site to Cloudflare Workers. First, we'll create a new NuxtJS project, and then we'll deploy it.

Recently, Cloudflare announced Workers support for static sites. This opens up a whole new era of static website deployment. Cloudflare Workers is a serverless platform that lets us write and run JavaScript and WebAssembly on Cloudflare's edge network. Now, with the help of Workers KV and Workers, we can deploy static websites built or generated with Nuxt, Hugo, Gatsby, or Jekyll directly to that network.


Workers Sites are very appealing because they are:

  • Low Cost
  • Very Fast
  • Secure
  • Easy to Scale

I have been using Cloudflare Workers for the past year for different purposes. I mostly work with VueJS for frontend development, and I prefer NuxtJS for complex websites that require support for routing, store management, and more.

When they announced support for static sites, the first thing on my mind was: what if we could deploy a NuxtJS site to Workers? This would be game-changing, as I wouldn't have to switch between multiple providers like Netlify, S3 static sites, and others. Out of curiosity, I quickly booted up a small NuxtJS project and deployed it. Here's a screenshot of a full-fledged NuxtJS site deployed on Cloudflare Workers.

Deploying a static NuxtJS site to Cloudflare Workers

Prerequisites


To publish your site to Cloudflare Workers, you'll need:

  • A Cloudflare account with the Workers Unlimited plan (it's just $5/month and includes the first 10M requests)
  • The latest version of the Wrangler CLI
  • Basic knowledge of VueJS/NuxtJS

Let's install the Wrangler CLI (skip this step if it's already installed):

npm i @cloudflare/wrangler -g

Now, you'll need to configure Wrangler for your Cloudflare account. You can run the config command to authenticate:

wrangler config

It'll ask you for your Cloudflare email and your Global API key. You can obtain your Global API Key as follows:

  1. Click Get API Key below the API section to jump to your Profile page.
  2. Scroll to API Keys, and click View to copy your Global API Key.

Now we're ready to create a project.

Create a new NuxtJS Project

Let's start with creating an empty NuxtJS project. You can run the below command:

npx create-nuxt-app nuxt-cloudflare-workers

Here, I am using the npm package manager to create a new NuxtJS project named nuxt-cloudflare-workers. It'll ask you some questions regarding the project name, description, server-side framework, UI framework, testing, and more. You can use the default options or choose what you're comfortable with. It'll also install node_modules for you.

Let's preview our site locally by running the commands below:

cd nuxt-cloudflare-workers
npm run dev

Now open up http://localhost:3000/ in your browser and you'll see something like the screenshot below.

Deploying a static NuxtJS site to Cloudflare Workers

Let's edit some text in pages/index.vue to make it look better. I'll change the title "nuxt-cloudflare-workers" to "NuxtJS" and change the description to "Yay! It's running on Cloudflare Workers."

Open up http://localhost:3000/ again and confirm that the text has changed.

Create a Wrangler config

In the previous step, we created a NuxtJS static site. In this step, we will create a Wrangler configuration file.

Let's initialize the Wrangler project:

wrangler init --site

It will create a sample wrangler.toml file in your root directory and create some files inside the workers-site directory. The wrangler.toml looks something like this:

account_id = ""
name = "nuxt-cloudflare-workers"
type = "webpack"
route = ""
workers_dev = true
zone_id = ""

[site]
bucket = ""
entry-point = "workers-site"

Let's customize this for our NuxtJS project. Here, account_id is your Cloudflare Account ID. If you are deploying to your custom domain (other than workers.dev), you will need to edit zone_id to match the Zone ID of your domain.

To obtain your Account ID & Zone ID:

  1. Log in to your Cloudflare account and select the desired domain.
  2. Select the Overview tab on the navigation bar.
  3. Scroll to the API section and select Click to copy to copy your Account ID.
  4. Copy your Zone ID.

You can edit the name of your Wrangler project; here I've kept it as nuxt-cloudflare-workers. Thus, it'll be deployed to nuxt-cloudflare-workers.<workers-sub-domain>.workers.dev.

Now set the value of bucket to dist, as NuxtJS will generate the static files inside the dist directory.

The final wrangler.toml file will be something like below:

account_id = "<account-id>"
name = "nuxt-cloudflare-workers"
type = "webpack"
route = ""
workers_dev = true
zone_id = ""

[site]
bucket = "dist"
entry-point = "workers-site"

So, our NuxtJS project is ready to deploy. In this step, we configured our wrangler.toml file to deploy this project as a static Cloudflare Workers Site.

Deploy it to Cloudflare Workers

In this step, we'll build our site and deploy it.

Let's build our static site using:

npm run generate

Deploying a static NuxtJS site to Cloudflare Workers

This will create a static site inside the dist directory with all the compiled HTML, CSS, and JS.

Now let's deploy it.

wrangler publish

This will deploy the static site from the dist directory to Cloudflare Workers. It might take some time for it to go live.

You can go to https://nuxt-cloudflare-workers.iconscout.workers.dev and check it.

Adding new pages

I added more pages, like about and contact, to check that routing works properly.

I've created pages/about.vue and deployed the site again.

You can check out new page at https://nuxt-cloudflare-workers.iconscout.workers.dev/about.

Testing Performance

A Cloudflare Workers Site is deployed on the Cloudflare edge network, which gives it low latency and high speed. I checked Google PageSpeed Insights for our newly deployed site and the results are mind-blowing: it scored 100/100, with a 50ms initial server response time.

Deploying a static NuxtJS site to Cloudflare Workers

Conclusion

Cloudflare is working hard to make developers' lives easier with its blazing-fast CDN, DNS, Page Rules, Workers, and more. Now, with Cloudflare Workers Sites, you can deploy statically generated sites built with VueJS, React, Next, NuxtJS, Hugo, Gatsby, or Jekyll in a few clicks. This can improve latency, speed, and performance.

I hope you found this tutorial helpful and that you'll be able to deploy your first Workers site right away. You can find the full source of the project here on GitHub.

Going Serverless With Cloudflare Workers


Cloudflare Workers lets devs build and extend the capabilities of serverless sites. In this article, we will learn how Cloudflare Workers works and when it makes sense to add it to our technology stack.

Originally published by Leonardo Losoviz at https://www.smashingmagazine.com

At its core, serverless is a strategy for a website’s architecture, based on deploying static files (the good old HTML, CSS and image files) on cloud-based hosting, and augmenting the website’s capabilities by accessing cloud-based, charge-per-use dynamic functionality. There is nothing mystical or mysterious about serverless: its end result is simply a website or application.

In spite of its name, “serverless” doesn’t mean “without a server”. It simply means “without my own server”. This means that my site is still hosted on some server, but by offloading this responsibility to the cloud services provider, I can devote all my energies to developing my own product (the website) and not have to worry about the infrastructure.

Serverless is very appealing for several reasons:

  • Low-cost

You only pay for what you use. Hosting static files on the cloud can cost just a few cents a month (or even be free in some cases).

  • Fast

Static files can be delivered to the user from a Content Delivery Network (CDN) located near the user.

  • Secure

The cloud provider constantly keeps the underlying platform up-to-date.

  • Easy to scale

The cloud provider’s business is to scale up the infrastructure on demand.

Serverless is also becoming increasingly popular due to the growing availability of services offered by cloud providers, simple-yet-powerful template-based static site generators (such as Jekyll, Hugo, or Gatsby), and convenient ways to feed data into the process (such as through one of the many git-based CMSs).

The Network Is The Computer: Introducing Cloudflare Workers

Cloudflare, one of the world's largest cloud network platforms, is well versed in providing the benefits we are after through serverless: for some time now they have made their extensive CDN available to make our sites fast, offered DDoS protection to make our sites secure, and made their 1.1.1.1 DNS service free so we could have privacy on the Internet, among many other services.

Their new serverless offering, Cloudflare Workers (or simply “Workers”), runs on the same global cloud network of over 165 data centers that powers those services. Cloudflare Workers is a service that provides a lightweight JavaScript execution environment to augment existing applications or create new ones.

Being stacked on top of Cloudflare’s widespread network makes Cloudflare Workers a big deal. Cloudflare can scale up its infrastructure based on spikes in demand, serving a serverless application from locations on five continents and supporting millions of users, making our applications fast, reliable, and scalable.

The Cloudflare network is powered by 165 data centers around the world.

On top of that, Cloudflare Workers provides unique features that make it an even more compelling service. Let’s explore these in detail.

Architecture Based On V8 For Fast Access And Low Cost

The Cloudflare engineers went out of their way to architect Workers, as they proudly explain in depth. Whereas almost every provider offering cloud computing has an architecture based on containers and virtual machines, Workers uses “Isolates”, the technology that allows V8 (Google Chrome’s JavaScript engine) to run thousands of processes on a single server in an efficient and secure manner.

Compared to virtual machines, Isolates greatly reduce the overhead required to execute user code, which translates into faster execution and lower use of memory.

Isolates allow thousands of processes to run efficiently on a single machine.

Cloudflare Workers is not the first serverless cloud computing platform in operation: for instance, Amazon has offered AWS Lambda and Lambda@Edge. However, as a consequence of the lower overhead produced by Isolates, Cloudflare claims that when executing a similar task, Workers beats the competition where it matters most: speed and money.

Lower Price

While a Worker offering 50 milliseconds of CPU costs $0.50 per million requests, the equivalent Lambda costs $1.84 per million. Hence, running Workers ends up being around 3x cheaper than Lambda per CPU-cycle.

Faster Access

The Cloudflare team ran tests comparing Workers against AWS Lambda and Lambda@Edge, and came to the conclusion that Workers is 441% faster than a Lambda function and 192% faster than Lambda@Edge.

This chart shows what percentage of requests to Lambda, Lambda@Edge, and Cloudflare Workers were faster than a given number of milliseconds.

The better performance achieved by Cloudflare Workers is confirmed by the third-party site serverless-benchmark.com, which measures the performance of serverless providers and provides continuously updated statistics.

Statistics for Overhead (the time from request to response without the actual time the function took) and Cold start (the latency it takes a function to respond to the event) for Cloudflare Workers and its competitors.

Coded In JavaScript, Modeled On The Service Workers API

Because it is based on V8, programming for Workers is done in those languages supported by V8: JavaScript and languages that support compilation to WebAssembly, such as Go and Rust. V8’s code is merged into Workers at least once a week, so we can always expect it to support the latest implemented flavor of ECMAScript.

Workers are modeled on the Service Workers available in modern web browsers, and they use the same API whenever possible. This is significant: Because Service Workers are part of the foundation to create a Progressive Web App (PWA), creating Workers is done through an API that developers are already familiar with (or may be in the process of learning) for creating modern web applications.

In addition, relying on the Service Workers API boosts developer productivity since it enables isomorphism of code, i.e. the same code that powers the Service Worker can be used for a Cloudflare Worker. Even though this is not always feasible because of the different contexts (a Service Worker runs in the browser, while the Cloudflare Worker runs in the network), certain use cases can apply to both contexts.

For instance, among the Service Workers recipes described in serviceworke.rs, recipes for API Analytics, Load Balancer, and Dependency Injection can be implemented on both the client side and the network using the same code (or most of it). And even when the functionality makes sense only on either the client-side or the network, it can be partly implemented using chunks of code that are context-agnostic and can be conveniently reused.
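As a minimal illustration of that isomorphism (a hypothetical handler, not one of the recipes above), the same fetch logic can be registered unchanged by a browser Service Worker and a Cloudflare Worker:

// A context-agnostic handler: nothing in it depends on running in the
// browser or at the edge.
async function handleFetch(request) {
  const response = await fetch(request)
  // ...record analytics, tweak headers, etc. here...
  return response
}

// Both a browser Service Worker and a Cloudflare Worker register it the same way.
addEventListener('fetch', event => {
  event.respondWith(handleFetch(event.request))
})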

Furthermore, using the same API for Service Workers and Cloudflare Workers makes it easy to provide progressive enhancement. An application can execute a Service Worker whenever possible, and fall back on a Cloudflare Worker when the user visits the site for the first time (when the Service Worker is not yet installed on the client), or when Service Workers are not supported (for instance, when the browser is old, or it is Opera Mini).
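A sketch of that progressive-enhancement pattern on the client side (assuming a hypothetical /sw.js that shares its handler with the Cloudflare Worker):

// Register the Service Worker where supported; everywhere else (first visit,
// old browsers, Opera Mini) requests simply fall through to the Cloudflare
// Worker running the same logic at the edge.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js')
}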

Finally, relying on a single API simplifies the overall language stack, once again making it easier for the developer to get more work done. For instance, defining a caching policy for the CDN in Varnish is done through the Varnish Configuration Language, which has its own syntax. Cloudflare Workers, though, enables developers to code the same tasks through, you guessed it, the Service Workers API.

It Leverages The Modern Toolbox

In addition to not requiring developers to learn any new language or API, Workers follows modern conventions and provides integration with popular technologies, allowing us to use our current toolbox.

Let’s See Some Practical Examples

It’s time to have fun! Let’s play with some Workers based on common use cases to see how we can augment our sites or even create new ones.

Cloudflare makes available a browser-based testing tool, the Cloudflare Workers Playground. This tool is very comprehensive and easy to use: simply copy the Worker script into the left-side panel, execute it against the URL defined in the top-right bar, see the results on the 'Preview' tab and the source code on the 'Testing' tab (from where we can also add custom headers), and execute console.log inside the script to show the results in the DevTools on the bottom-right. To share (or store) your script, you can simply copy your browser's URL at that point in time.

The Playground allows us to test-drive our Cloudflare Workers.

Starting with the Playground will take you far, but, at some point, you will want to test on the actual Cloudflare network and, even better, deploy your scripts for production. For this, your site must be set up with Cloudflare. If you already are a Cloudflare user, simply sign in, navigate to the ‘Workers’ tab on the dashboard, and you are ready to go.

If you are not a Cloudflare user, you can either sign up, or you can request a workers.dev subdomain, under which you will soon be able to deploy your Workers. The workers.dev site is currently accepting reservations of subdomains, so hurry up and reserve yours before it is taken by someone else!

workers.dev is currently accepting reservations of subdomains.

The recipes below have been taken from the Cloudflare Workers Recipe cookbook, from the examples repository on GitHub, and from the Cloudflare blog. Each example has a link to the script code in the Playground.

Static Site Hosting

The most straightforward use case for Workers is to create a new site, dynamically responding to requests without needing to connect to an origin server at all. So, hello world!

addEventListener('fetch', event => {
  event.respondWith(new Response('<html><body><p>Hello world!</p></body></html>'))
})

Instead of printing the HTML output in the script, we can also host static HTML files with some hosting service, and fetch these with a simple Worker script. Indeed, the Worker script can retrieve the content of any file available on the Internet: While the domain under which the Worker is executed must be handled by Cloudflare, the origin website from which the script fetches content does not have to. And this works not just for HTML pages, but also for CSS and JS assets, images, and everything else.

The script below, for instance, renders a page that is hosted under DigitalOcean Spaces:

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  const parsedUrl = new URL(request.url)
  let path = parsedUrl.pathname

  let lastSegment = path.substring(path.lastIndexOf('/'))
  if (lastSegment.indexOf('.') === -1) {
    path += '/index.html'
  }

  return fetch("https://cloudflare-example-space.nyc3.digitaloceanspaces.com" + path)
}

Building APIs

A prominent use case for Workers is creating APIs. For instance, the script below powers an API service that states whether a domain redirects to HTTPS or not:

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

/**
 * Fetch a request and follow redirects
 * @param {Request} request
 */
async function handleRequest(request) {
  let headers = new Headers({
    'Content-Type': 'text/html',
    'Access-Control-Allow-Origin': '*'
  })
  const SECURE_RESPONSE = new Response('secure', {status: 200, headers: headers})
  const INSECURE_RESPONSE = new Response('not secure', {status: 200, headers: headers})
  const NO_SUCH_SITE = new Response('website not found', {status: 200, headers: headers})

  let domain = new URL(request.url).searchParams.get('domain')
  if (domain === null) {
    return new Response('Please pass in domain via query string', {status: 404})
  }
  try {
    let resp = await fetch(`http://${domain}`, {headers: {'User-Agent': request.headers.get('User-Agent')}})
    if (resp.redirected == true && resp.url.startsWith('https')) {
      return SECURE_RESPONSE
    }
    else if (resp.redirected == false && resp.status == 502) {
      return NO_SUCH_SITE
    }
    else {
      return INSECURE_RESPONSE
    }
  }
  catch (e) {
    return new Response(`Something went wrong ${e}`, {status: 404})
  }
}

Workers can also connect to several origins in parallel and combine all the responses into a single response. For instance, the script below powers an API service that simultaneously retrieves the price for several cryptocurrency coins:

addEventListener('fetch', event => {
    event.respondWith(fetchAndApply(event.request))
})
  
/**
 * Make multiple requests, 
 * aggregate the responses and 
 * send it back as a single response
 */
async function fetchAndApply(request) {
    const init = {
      method: 'GET',
      headers: {'Authorization': 'XXXXXX'}
    }
    const [btcResp, ethResp, ltcResp] = await Promise.all([
      fetch('https://api.coinbase.com/v2/prices/BTC-USD/spot', init),
      fetch('https://api.coinbase.com/v2/prices/ETH-USD/spot', init),
      fetch('https://api.coinbase.com/v2/prices/LTC-USD/spot', init)
    ])
  
    const btc = await btcResp.json()
    const eth = await ethResp.json()
    const ltc = await ltcResp.json()
  
    let combined = {}
    combined['btc'] = btc['data'].amount
    combined['ltc'] = ltc['data'].amount
    combined['eth'] = eth['data'].amount
  
    const responseInit = {
      headers: {'Content-Type': 'application/json'}
    }
    return new Response(JSON.stringify(combined), responseInit)
}

Making the API highly dynamic by retrieving data from a database is covered too! Workers KV is a global, low-latency, key-value data store. It is optimized for quick and frequent reads, and data should be written sparingly. Hence, a sensible approach is to input data through the Cloudflare API:

curl "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/storage/kv/namespaces/$NAMESPACE_ID/values/first-key" \
  -X PUT \
  -H "X-Auth-Email: $CLOUDFLARE_EMAIL" \
  -H "X-Auth-Key: $CLOUDFLARE_AUTH_KEY" \
  --data 'My first value!'

And then the values can be read from within the Worker script:

addEventListener('fetch', event => {
 event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
 const value = await FIRST_KV_NAMESPACE.get("first-key")
 if (value === null)
   return new Response("Value not found", {status: 404})

 return new Response(value)
}

At the time of writing, KV is still in beta and released only to beta testers. If you are interested in testing it out, you can reach out to the Cloudflare team and request access.

Geo-Targeting

Cloudflare detects the origin IP of the incoming request and appends a two-letter country code to header ‘Cf-Ipcountry’. The script below reads this header, obtains the country code, and then redirects to the corresponding site version if it exists:

addEventListener('fetch', event => {
 event.respondWith(fetchAndApply(event.request))
})

async function fetchAndApply(request) {

   const country = request.headers.get('Cf-Ipcountry').toLowerCase() 
   let url = new URL(request.url)

   const target_url = 'https://' + url.hostname + '/' + country
   const target_url_response = await fetch(target_url)

   if(target_url_response.status === 200) {
       return new Response('', {
         status: 302,
         headers: {
           'Location': target_url
         }
       })     
   } else {
       return fetch(request)
   }
}

A similar approach can be applied to implement load balancing, choosing among multiple origins to improve speed or reliability.
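For instance, here is a minimal sketch (not one of the official recipes; the origin hostnames are hypothetical) that picks an origin at random for each request:

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  // Two hypothetical origins serving identical content.
  const origins = ['https://origin-1.example.com', 'https://origin-2.example.com']
  const origin = origins[Math.floor(Math.random() * origins.length)]

  const url = new URL(request.url)
  return fetch(origin + url.pathname + url.search, request)
}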

Enhanced Security

The scripts below add security rules and filters to block unwanted visitors and bots.

Ignore the POST and PUT HTTP requests:

addEventListener('fetch', event => {
  event.respondWith(fetchAndApply(event.request))
})

async function fetchAndApply(request) {  
  if (request.method === 'POST' || request.method === 'PUT') {
    return new Response('Sorry, this page is not available.',
        { status: 403, statusText: 'Forbidden' })
  }

  return fetch(request)
}

Deny a spider or crawler:

addEventListener('fetch', event => {
  event.respondWith(fetchAndApply(event.request))
})

async function fetchAndApply(request) {  
  if (request.headers.get('user-agent').includes('annoying_robot')) {
    return new Response('Sorry, this page is not available.',
        { status: 403, statusText: 'Forbidden' })
  }

  return fetch(request)
}

Prevent a specific IP from connecting:

addEventListener('fetch', event => {
  event.respondWith(fetchAndApply(event.request))
})

async function fetchAndApply(request) {  
  if (request.headers.get('cf-connecting-ip') === '225.0.0.1') {
    return new Response('Sorry, this page is not available.',
        { status: 403, statusText: 'Forbidden' })
  }

  return fetch(request)
}

A/B Testing

We can easily create a Worker to control A/B tests:

addEventListener('fetch', event => {
  event.respondWith(fetchAndApply(event.request))
})

async function fetchAndApply(request) {
  const name = 'experiment-0'
  let group          // 'control' or 'test', set below
  let isNew = false  // is the group newly-assigned?

  // Determine which group this request is in.
  const cookie = request.headers.get('Cookie')
  if (cookie && cookie.includes(`${name}=control`)) {
    group = 'control'
  } else if (cookie && cookie.includes(`${name}=test`)) {
    group = 'test'
  } else {
    // 50/50 Split
    group = Math.random() < 0.5 ? 'control' : 'test'
    isNew = true
  }

  // We'll prefix the request path with the experiment name. This way,
  // the origin server merely has to have two copies of the site under
  // top-level directories named "control" and "test".
  let url = new URL(request.url)
  // Note that url.pathname always begins with a /, so we don't
  // need to explicitly add one after ${group}.
  url.pathname = `/${group}${url.pathname}`

  request = new Request(url, request)

  let response = await fetch(request)

  if (isNew) {
    // The experiment was newly-assigned, so add a Set-Cookie header
    // to the response. We need to re-construct the response to make
    // the headers mutable.
    response = new Response(response.body, response)
    response.headers.append('Set-Cookie', `${name}=${group}; path=/`)
  }

  return response
}

Serving Device-Based Content

The script below delivers different content based on the device being used:

addEventListener('fetch', event => {
  event.respondWith(fetchAndApply(event.request))
})

async function fetchAndApply(request) {
  let uaSuffix = ''

  const ua = request.headers.get('user-agent')
  if (ua.match(/iphone/i) || ua.match(/ipod/i)) {
    uaSuffix = '/mobile'
  } else if (ua.match(/ipad/i)) {
    uaSuffix = '/tablet'
  }

  return fetch(request.url + uaSuffix, request)
}

Conditional Routing

By passing custom values through headers, we can fetch the most specific content:

addEventListener('fetch', event => {
  event.respondWith(fetchAndApply(event.request))
})

async function fetchAndApply(request) {
  let suffix = ''
  //Assuming that the client is sending a custom header
  const cryptoCurrency = request.headers.get('X-Crypto-Currency')
  if (cryptoCurrency === 'BTC') {
    suffix = '/btc'
  } else if (cryptoCurrency === 'XRP') {
    suffix = '/xrp'
  } else if (cryptoCurrency === 'ETH') {
    suffix = '/eth'
  }

  return fetch(request.url + suffix, request)
}

Enhanced Performance

Workers makes a Cache API available, through which we can save the results of computationally intensive work and have them ready for immediate use from then on:

async function handleRequest(event) {
  let cache = caches.default
  let response = await cache.match(event.request)
      
  if (!response) {
    response = doSuperComputationallyHeavyThing()
    event.waitUntil(cache.put(event.request, response.clone()))
  }
        
  return  response
}

For instance, through the Cache API we can store GraphQL requests whose results have not changed:

async function handleRequest(event) {
  let cache = caches.default
  let response = await cache.match(event.request)
  
  if (!response){
    response = await fetch(event.request)
    if (response.ok) {
      event.waitUntil(cache.put(event.request, response.clone()))
    }
  }
        
  return response
}
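Cached entries can also be evicted explicitly when the underlying data changes; a small sketch (not from the original article):

// Remove a previously cached response so that the next request fetches fresh data.
async function purgeFromCache(request) {
  const cache = caches.default
  await cache.delete(request)
}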

Many Others

The list of useful applications goes on and on.

Wrapping Up: “The Network Is The Computer”

Because speed matters, websites are going serverless. Cloudflare Workers is a new offering that enables this transition. It blurs the boundaries between the computer and the network, enabling developers to deploy apps globally that run on the fabric of the Internet itself, leveraging Cloudflare’s worldwide network of servers to run our code near where our users are located. It is fast, cheap, and secure, and it scales as much as we need it.

Thanks for reading

If you liked this post, share it with all of your programming buddies!


Further reading about Serverless

Introduction to Serverless

Create and Deploy AWS and AWS Lambda using Serverless Framework

Running TensorFlow on AWS Lambda using Serverless

Easily Deploy a Serverless Node App with ZEIT Now

What Is Serverless Computing?

Building a Full-Stack Serverless App with Cloudflare Workers

Serverless PHP on App Engine and Cloud Firestore with Firevel (serverless Laravel framework)

How to deploy a Vue.js application to Kubernetes


In this tutorial, we are going to see how to deploy a Vue.js application to Kubernetes using DevSpace.

Deploy Vue.js applications to Kubernetes in a few steps using DevSpace. Here are the commands we are going to use:

npm install -g vue-cli devspace
vue init webpack-simple my-vuejs-app && cd my-vuejs-app 
devspace init 
devspace create space my-vuejs-app 
devspace deploy 
devspace open

This is basically what these commands are doing:

  1. Create a Vue.js app (or just use your own project)
  2. Containerize our Vue.js app (Dockerfile & Helm Chart)
  3. Deploy it to Kubernetes (to the namespace my-vuejs-app)
  4. Stream the logs of the deployment

Prerequisites

We will use DevSpace, an open-source development tool for Kubernetes. Run this command to install DevSpace:

npm install -g devspace

1. Create a Vue.js Application (skip if you have one)

First, let’s install the Vue CLI globally:

npm install -g vue-cli

You might need to install it with sudo if you get a permission denied error on Linux or Mac. Or, better but more work: follow this guide to change your global npm directory.

Let's now ask Vue to set up a webpack template for us:

vue init webpack-simple my-vuejs-app 
cd my-vuejs-app

The application is now ready and we can deploy it to Kubernetes.

2. Containerize our Vue.js Application

To run our Vue.js app on Kubernetes, we need a Dockerfile and some Kubernetes manifests. Instead of creating these files manually, we can let DevSpace automatically create them for us with the following command:

devspace init

DevSpace will ask a couple of questions during the init command. Take a look at the following sections for more information.

Creating the Dockerfile for Vue.js

If you have no Dockerfile in your project, DevSpace will automatically detect that and offer to create one for you.

If there is already a Dockerfile in your project, DevSpace will offer to use the existing Dockerfile instead of creating a new one.

? This project does not have a Dockerfile. What do you want to do? [Use arrows to move, space to select, type to filter]
> Create a Dockerfile for me
  Enter path to your Dockerfile
  Enter path to your Kubernetes manifests
  Enter path to your Helm chart
  Use existing image

Choose the first option Create a Dockerfile for me by hitting Enter and DevSpace will create a Dockerfile in your project directory.

Select Your Programming Language

Again, DevSpace will automatically detect your programming language (i.e. javascript); you just need to confirm by hitting Enter.

? Select the programming language of this project [Use arrows to move, space to select, type to filter] 
  java 
> javascript 
  none 
  php 
  python 
  ruby 
  typescript

Choose an Image Registry

DevSpace will build a Docker image from your Dockerfile. This image needs to be pushed to a Docker registry like Docker Hub.

? Which registry do you want to use for storing your Docker images? [Use arrows to move, space to select, type to filter] 
  Use hub.docker.com 
> Use dscr.io (free, private Docker registry) 
  Use other registry

If you already have a Docker Hub account, just select Docker Hub.

In case you don't have a Docker Hub account, or if you don't know which option to choose, you can just use dscr.io; it is fast, private, and completely free. Once you select dscr.io, DevSpace will open the login page. You can sign up with GitHub and it will take less than a minute.

Define The Application Port

Vue.js runs on port 8080 in dev mode. Type 8080 and hit Enter.

? Which port is your application listening on? (Enter to skip) 8080

It is important that you type in the correct port. Otherwise, you will have trouble opening the app in the browser later on.

What just happened?

After the init command has terminated, you will find the following new files in your project:

my-vuejs-app/ 
 | 
 |--devspace.yaml 
 |--Dockerfile 
 |--.dockerignore

Besides the configuration for DevSpace in devspace.yaml, there is a new Dockerfile for building a Docker image for our Vue.js app which is needed to deploy the app to Kubernetes.

Dockerfile for Vue.js (for Development)

The Dockerfile will look like this:

FROM node:8.11.4

RUN mkdir /app
WORKDIR /app

COPY package.json .
RUN npm install

COPY . .

CMD ["npm", "start"]   # <<<<<<< We need to change this

Note that this Dockerfile starts our Vue.js app in development mode.
For hosting the Vue.js app in production, it is recommended to use multi-stage Docker builds for creating static assets and then serving them with a web server such as nginx. I will publish a post about this very soon.

Subscribe to get an email about new articles if you are interested in reading my next post. But first, let’s continue with this tutorial…

The autogenerated Dockerfile is almost good to go. We just need to change the command used to start our app. If you open your package.json, you will see that there is no start command in the scripts. Instead, we want to run npm run dev. So, let's change that in the Dockerfile like this:

FROM node:8.11.4

RUN mkdir /app
WORKDIR /app

COPY package.json .
RUN npm install

COPY . .

CMD ["npm", "run", "dev"]

This Dockerfile:

  • defines node as base image
  • creates the working directory /app
  • copies the package.json into the image and installs the dependencies
  • copies the rest of our application into the working directory
  • defines npm run dev as the start command for the container

Adding the package.json separately and installing dependencies before adding the rest of the application allows Docker to use layer-caching and saves a lot of time when building the image because the dependencies will only have to be re-installed when the package.json has changed and not every time we change any other source code file.

Helm Chart For Vue.js

To start containers in Kubernetes, we need to define so-called Kubernetes manifests. Instead of using plain Kubernetes manifests, it is recommended to bundle them into a so-called Helm chart. Helm is the package manager for Kubernetes and a Helm chart is a package that can be installed into a Kubernetes cluster using the open-source CLI tool Helm.

By default, DevSpace deploys your application using the so-called component chart which is a standardized and highly configurable Helm chart. To customize the deployment of your app, you can edit the deployments section of the devspace.yaml config file which looks similar to this one:

... 
deployments: 
- name: my-vuejs-app 
  helm: 
    chart: 
      name: component-chart 
      version: v0.0.6 
      repo: https://charts.devspace.cloud 
    values: 
      containers: 
      - image: dscr.io/${DEVSPACE_USERNAME}/devspace 
      service: 
        ports: 
        - port: 8080 
...

3. Deploy Your Vue.js App

Now that our Vue.js app is containerized, we are ready to deploy it to Kubernetes. We just need to tell DevSpace which Kubernetes cluster to use.

Option A: Create a Free, Hosted Kubernetes Namespace

At this point, I could tell you to create a Kubernetes cluster on Google Cloud or AWS, but do we really need to create an entire Kubernetes cluster just for deploying our small Vue.js app?

If you don’t have a Kubernetes cluster yet, it is much easier to create a ready-to-go Kubernetes namespace using DevSpace:

devspace create space my-vuejs-app

This command will create a free Kubernetes namespace on DevSpace Cloud which is hosted on Google Cloud.

Option B: Use Your Own Kubernetes Cluster (e.g. minikube)

DevSpace would not be called the Swiss Army knife for Kubernetes if it didn't work with any Kubernetes cluster. So, if you already have a Kubernetes cluster and want to use that one instead, you can use a namespace within your own cluster with the following command:

devspace use namespace my-vuejs-app

DevSpace will create the namespace during the deployment process if it does not exist yet.

Build & Deploy Your App

Usually, this part of the tutorial would explain how to manually build a Docker image, push it to a registry and mess around with kubectl commands.

However, DevSpace automates all of this and you will just need to run one single command to deploy your app:

devspace deploy

This command takes a little while when you run it the very first time. If you run it again later, it will be much quicker.

Now it’s time to open your project.

devspace open

When DevSpace asks you how to open your application, choose the first option: via localhost

? How do you want to open your application? [Use arrows to move, space to select, type to filter] 
> via localhost (provides private access only on your computer via 
  port-forwarding) 
  via domain (makes your application publicly available via ingress)

4. Start Development

If you want to edit your files and see how your app automatically reloads, simply run:

devspace dev

DevSpace will deploy the Vue.js app in dev mode, open the app in the browser and start streaming the logs of your application.

Wait until your app has started, change a file and see hot reloading in action. Happy coding!

Final Thoughts

This tutorial gives a quick overview of how to deploy a Vue.js app to Kubernetes using DevSpace. Vue.js as a frontend application might be quite easy to deploy using platforms such as Netlify etc., but using Kubernetes provides a great level of control and portability and is also suitable for backend applications. DevSpace works with every Kubernetes cluster and every programming language, which lets you unify the way you deploy applications across different projects.

As mentioned above, this tutorial does not provide an exhaustive guide for creating a fully production-ready deployment of a Vue.js app (e.g. because it uses a simple Dockerfile to start the Vue.js app in dev mode). However, this article can serve as a starting point for everyone that is interested in using Kubernetes.

If you have any questions regarding any of the steps I have shown above or regarding Vue.js deployments on Kubernetes in general, feel free to leave a comment. I am happy to help whenever I can.

Top 70 AWS Architect Interview Questions and Answers In 2019


In this post, you'll see a list of AWS Architect Interview questions and answers that will most probably get asked during your interview.

Originally published at https://www.edureka.co

Why AWS Architect Interview Questions?

For the 7th straight year, Gartner placed Amazon Web Services in the "Leaders" quadrant. Also, Forbes reported that AWS Certified Solutions Architect leads the 15 top-paying IT certifications. Undoubtedly, the AWS Solution Architect position is one of the most sought-after IT jobs.

The AWS Solution Architect Role: With regard to AWS, a Solution Architect designs and defines AWS architecture for existing systems, migrates them to cloud architectures, and develops technical road-maps for future AWS cloud implementations. So, through this AWS Architect interview questions article, I will bring you the top and most frequently asked AWS interview questions.

In every section, we will start with basic AWS interview questions, and then move towards AWS interview questions and answers for experienced people, which are more technically challenging.

Section 1: What is Cloud Computing? Can you talk about and compare any two popular Cloud Service Providers?

For a detailed discussion on this topic, please refer to our Cloud Computing blog. The following is a comparison between two of the most popular Cloud Service Providers:

Amazon Web Services Vs Microsoft Azure

1. Try this AWS scenario-based interview question: I have some private servers on my premises, and I have also distributed some of my workload on the public cloud. What is this architecture called?

  1. Virtual Private Network
  2. Private Cloud
  3. Virtual Private Cloud
  4. Hybrid Cloud

Answer D.

Explanation: This type of architecture would be a hybrid cloud. Why? Because we are using both the public cloud and your on-premises servers, i.e. the private cloud. To make this hybrid architecture easy to use, wouldn't it be better if your private and public clouds were all on the same (virtual) network? This is established by including your public cloud servers in a virtual private cloud and connecting this virtual cloud with your on-premises servers using a VPN (Virtual Private Network).

Section 2: Amazon EC2 Interview Questions

2. What does the following command do with respect to the Amazon EC2 security groups?

ec2-create-group CreateSecurityGroup

  1. Groups the user created security groups into a new group for easy access.
  2. Creates a new security group for use with your account.
  3. Creates a new group inside the security group.
  4. Creates a new rule inside the security group.

Answer B.

Explanation: A security group is just like a firewall; it controls the traffic in and out of your instance, in AWS terms the inbound and outbound traffic. The command mentioned is pretty straightforward: it says create security group, and it does just that. Moving along, once your security group is created, you can add different rules to it. For example, if you have an RDS instance, then to access it you have to add the public IP address of the machine from which you want to access the instance to its security group.

3. Here is an AWS scenario-based interview question: You have a video transcoding application. The videos are processed according to a queue. If the processing of a video is interrupted in one instance, it is resumed in another instance. Currently there is a huge backlog of videos which need to be processed; for this you need to add more instances, but you need these instances only until your backlog is reduced. Which of these would be an efficient way to do it?

You should be using On-Demand instances for this. Why? First of all, the workload has to be processed now, meaning it is urgent. Secondly, you don't need the instances once your backlog is cleared, so Reserved Instances are out of the picture. And since the work is urgent, you cannot stop the work on your instance just because the spot price spiked, so Spot Instances shall also not be used. Hence, On-Demand instances are the right choice in this case.

4. You have a distributed application that periodically processes large volumes of data across multiple Amazon EC2 Instances. The application is designed to recover gracefully from Amazon EC2 instance failures. You are required to accomplish this task in the most cost effective way.

Which of the following will meet your requirements?

  1. Spot Instances
  2. Reserved instances
  3. Dedicated instances
  4. On-Demand instances

Answer: A

Explanation: Since the work we are addressing here is not continuous, a Reserved Instance would be idle at times; the same goes for On-Demand instances. Also, it does not make sense to launch an On-Demand instance whenever work comes up, since it is expensive. Hence, Spot Instances are the right fit because of their low rates and lack of long-term commitments.

5. How is stopping and terminating an instance different from each other?

Starting, stopping, and terminating are the three main operations on an EC2 instance; let's discuss them in detail:

  • Stopping and Starting an instance: When an instance is stopped, the instance performs a normal shutdown and then transitions to a stopped state. All of its Amazon EBS volumes remain attached, and you can start the instance again at a later time. You are not charged for additional instance hours while the instance is in a stopped state.
  • Terminating an instance: When an instance is terminated, the instance performs a normal shutdown, then the attached Amazon EBS volumes are deleted unless the volume’s deleteOnTermination attribute is set to false. The instance itself is also deleted, and you can’t start the instance again at a later time.

6. If I want my instance to run on a single-tenant hardware, which value do I have to set the instance’s tenancy attribute to?

  1. Dedicated
  2. Isolated
  3. One
  4. Reserved

Answer A.

Explanation: The Instance tenancy attribute should be set to Dedicated Instance. The rest of the values are invalid.

7. When will you incur costs with an Elastic IP address (EIP)?

  1. When an EIP is allocated.
  2. When it is allocated and associated with a running instance.
  3. When it is allocated and associated with a stopped instance.
  4. Costs are incurred regardless of whether the EIP is associated with a running instance.

Answer C.

Explanation: You are not charged if only one Elastic IP address is attached to your running instance. But you do get charged under the following conditions:

  • When you use more than one Elastic IPs with your instance.
  • When your Elastic IP is attached to a stopped instance.
  • When your Elastic IP is not attached to any instance.

8. How is a Spot instance different from an On-Demand instance or Reserved Instance?

First of all, let's understand that Spot Instances, On-Demand instances, and Reserved Instances are all pricing models. Spot instances provide the ability for customers to purchase compute capacity with no upfront commitment, at hourly rates usually lower than the On-Demand rate in each region. Spot instances work like bidding; the bidding price is called the Spot Price. The Spot Price fluctuates based on supply and demand for instances, but customers will never pay more than the maximum price they have specified. If the Spot Price moves higher than a customer's maximum price, the customer's EC2 instance will be shut down automatically. But the reverse is not true: if the Spot price comes down again, your EC2 instance will not be launched automatically; one has to do that manually. With Spot and On-Demand instances, there is no commitment for the duration from the user's side; however, with Reserved Instances one has to stick to the time period that was chosen.

9. Are the Reserved Instances available for Multi-AZ Deployments?

  1. Multi-AZ Deployments are only available for Cluster Compute instances types
  2. Available for all instance types
  3. Only available for M3 instance types
  4. Not Available for Reserved Instances

Answer B.

Explanation: Reserved Instances is a pricing model, which is available for all instance types in EC2.

10. How to use the processor state control feature available on the c4.8xlarge instance?

The processor state control consists of 2 states:

  • The C state – sleep states varying from C0 to C6, with C6 being the deepest sleep state for a processor.
  • The P state – performance states, with P0 being the highest and P15 being the lowest possible frequency.

Now, why the C state and P state? Processors have cores, and these cores need thermal headroom to boost their performance. Since all the cores are on the processor, the temperature should be kept at an optimal level so that all the cores can perform at their highest level.

Now how will these states help? If a core is put into a sleep state, it will reduce the overall temperature of the processor, and hence other cores can perform better. The same can be coordinated across cores, so that the processor can boost as many cores as it can by putting other cores to sleep at the right time, and thus get an overall performance boost.

In conclusion, the C and P states can be customized on some EC2 instances, like the c4.8xlarge instance, and thus you can tune the processor according to your workload.

How to do it? You can refer to this tutorial.

11. What kind of network performance parameters can you expect when you launch instances in cluster placement group?

The network performance depends on the instance type and network performance specification. If launched in a placement group, you can expect up to:

  • 10 Gbps in a single flow,
  • 20 Gbps in multi-flow, i.e. full duplex.
  • Network traffic outside the placement group will be limited to 5 Gbps (full duplex).

12. To deploy a 4 node cluster of Hadoop in AWS which instance type can be used?

First, let's understand what actually happens in a Hadoop cluster. The Hadoop cluster follows a master-slave concept: the master machine processes all the data, while slave machines store the data and act as data nodes. Since all the storage happens at the slaves, a higher-capacity hard disk is recommended for them, and since the master does all the processing, a higher RAM and a much better CPU are required. Therefore, you can select the configuration of your machines depending on your workload. For example, in this case c4.8xlarge would be preferred for the master machine, whereas for the slave machines we can select an i2.large instance. If you don't want to deal with configuring your instances and installing a Hadoop cluster manually, you can straight away launch an Amazon EMR (Elastic MapReduce) instance, which automatically configures the servers for you. You dump your data to be processed in S3, EMR picks it up from there, processes it, and dumps it back into S3.

13. Where do you think an AMI fits, when you are designing an architecture for a solution?

AMIs (Amazon Machine Images) are like templates of virtual machines, and an instance is derived from an AMI. AWS offers pre-baked AMIs which you can choose from while you are launching an instance; some AMIs are not free and can be bought from the AWS Marketplace. You can also choose to create your own custom AMI, which can help you save space on AWS. For example, if you don't need a set of software in your installation, you can customize your AMI to exclude it. This makes it cost-efficient, since you are removing the unwanted things.

14. How do you choose an Availability Zone?

Let's understand this through an example: consider a company which has a user base in India as well as in the US.

Let us see how we will choose the region for this use case:

So, with reference to the above figure, the regions to choose between are Mumbai and North Virginia. Let us first compare the pricing: you have hourly prices, which can be converted to a per-month figure. Here North Virginia emerges as the winner. But pricing cannot be the only parameter to consider; performance should also be kept in mind, so let's look at latency as well. Latency is basically the time that a server takes to respond to your requests, i.e. the response time. North Virginia wins again!

So concluding, North Virginia should be chosen for this use case.

15. Is one Elastic IP address enough for every instance that I have running?

It depends! Every instance comes with its own private and public address. The private address is associated exclusively with the instance and is returned to Amazon EC2 only when the instance is stopped or terminated. Similarly, the public address is associated exclusively with the instance until it is stopped or terminated. A public address can, however, be replaced by an Elastic IP address, which stays with the instance as long as the user doesn’t manually detach it. But if you are hosting multiple websites on your EC2 server, you may require more than one Elastic IP address.
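
Allocating and attaching an Elastic IP with boto3 looks roughly like this (the instance ID is a placeholder):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allocate a new Elastic IP in the VPC scope...
eip = ec2.allocate_address(Domain="vpc")
print("Allocated", eip["PublicIp"])

# ...and associate it with a running instance. It stays attached until you
# explicitly disassociate or release it, even across stop/start cycles.
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",   # hypothetical instance
    AllocationId=eip["AllocationId"],
)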

16. What are the best practices for Security in Amazon EC2?

There are several best practices to secure Amazon EC2. A few of them are given below:

  • Use AWS Identity and Access Management (IAM) to control access to your AWS resources.
  • Restrict access by allowing only trusted hosts or networks to access ports on your instance (see the example after this list).
  • Review the rules in your security groups regularly, and ensure that you apply the principle of least privilege – only open up the permissions that you require.
  • Disable password-based logins for instances launched from your AMI. Passwords can be found or cracked, and are a security risk.
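
For instance, restricting SSH to a single trusted office CIDR with boto3 might look like this (the security group ID and CIDR are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allow SSH (TCP/22) only from a trusted corporate network range.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",          # hypothetical security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "office VPN"}],
    }],
)
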
Section 3: Amazon Storage

17. Another scenario based interview question. You need to configure an Amazon S3 bucket to serve static assets for your public-facing web application. Which method will ensure that all objects uploaded to the bucket are set to public read?

  1. Set permissions on the object to public read during upload.
  2. Configure the bucket policy to set all objects to public read.
  3. Use AWS Identity and Access Management roles to set the bucket to public read.
  4. Amazon S3 objects default to public read, so no action is needed.

Answer B.

Explanation: Rather than making changes to every object, it’s better to set the policy for the whole bucket. IAM is used to grant more granular permissions to users and roles, not to make objects public, and S3 objects are private by default, so options 3 and 4 are incorrect.
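
A hedged sketch of option B with boto3 (the bucket name is a placeholder; on newer buckets you may also need to relax the “Block Public Access” settings before a public-read policy is accepted):

import json
import boto3

s3 = boto3.client("s3")
bucket = "my-static-assets-bucket"   # hypothetical bucket

# Bucket policy that makes every object in the bucket readable by anyone.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))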

18. A customer wants to leverage Amazon Simple Storage Service (S3) and Amazon Glacier as part of their backup and archive infrastructure. The customer plans to use third-party software to support this integration. Which approach will limit the access of the third party software to only the Amazon S3 bucket named “company-backup”?

  1. A custom bucket policy limited to the Amazon S3 API in the Amazon Glacier archive “company-backup”
  2. A custom bucket policy limited to the Amazon S3 API in “company-backup”
  3. A custom IAM user policy limited to the Amazon S3 API for the Amazon Glacier archive “company-backup”.
  4. A custom IAM user policy limited to the Amazon S3 API in “company-backup”.

Answer D.

Explanation: Taking a cue from the previous question, this use case calls for more granular, per-user permissions, hence a custom IAM user policy scoped to the bucket would be used here.
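
A minimal sketch of such a policy attached to the third-party software’s IAM user (the user name and policy name are placeholders):

import json
import boto3

iam = boto3.client("iam")

# Allow S3 actions only on the "company-backup" bucket and its objects.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::company-backup",
            "arn:aws:s3:::company-backup/*",
        ],
    }],
}

iam.put_user_policy(
    UserName="backup-software",          # hypothetical IAM user
    PolicyName="company-backup-only",    # hypothetical policy name
    PolicyDocument=json.dumps(policy),
)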

19. Can S3 be used with EC2 instances, if yes, how?

Yes, it can be used for instances with root devices backed by local instance storage. By using Amazon S3, developers get access to the same highly scalable, reliable, fast and inexpensive data storage infrastructure that Amazon uses to run its own global network of websites. To execute systems in the Amazon EC2 environment, developers use the provided tools to load their Amazon Machine Images (AMIs) into Amazon S3 and to move them between Amazon S3 and Amazon EC2.

Another use case could be for websites hosted on EC2 to load their static content from S3.
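
For the second use case, pushing static content to S3 from an EC2-hosted application and handing browsers a link to it takes only a couple of boto3 calls (the bucket and key names are placeholders):

import boto3

s3 = boto3.client("s3")

# Push a static asset from the EC2 instance to S3...
s3.upload_file("logo.png", "my-website-assets", "img/logo.png")

# ...and hand the browser a time-limited URL instead of serving the file from EC2.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-website-assets", "Key": "img/logo.png"},
    ExpiresIn=3600,
)
print(url)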

20. A customer implemented AWS Storage Gateway with a gateway-cached volume at their main office. An event takes the link between the main and branch office offline. Which methods will enable the branch office to access their data?

  1. Restore by implementing a lifecycle policy on the Amazon S3 bucket.
  2. Make an Amazon Glacier Restore API call to load the files into another Amazon S3 bucket within four to six hours.
  3. Launch a new AWS Storage Gateway instance AMI in Amazon EC2, and restore from a gateway snapshot.
  4. Create an Amazon EBS volume from a gateway snapshot, and mount it to an Amazon EC2 instance.

Answer C.

Explanation: The fastest option is to launch a new storage gateway instance in Amazon EC2 and restore from a gateway snapshot. Why? Since time is the key factor that drives every business, troubleshooting the broken link would take longer; instead, we can simply restore the previous working state of the storage gateway on a new instance.

21. When you need to move data over long distances using the internet, for instance across countries or continents to your Amazon S3 bucket, which method or service will you use?

  1. Amazon Glacier
  2. Amazon CloudFront
  3. Amazon Transfer Acceleration
  4. Amazon Snowball

Answer C.

Explanation: You would not use Snowball because, for now, the Snowball service does not support cross-region data transfer, and since we are transferring across countries, Snowball cannot be used. Transfer Acceleration is the right choice here: it speeds up your data transfers by up to 300% compared to normal transfer speeds, using optimized network paths and Amazon CloudFront’s globally distributed edge locations.
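
Enabling and using Transfer Acceleration with boto3 is a small change (the bucket name is a placeholder):

import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# One-time: turn on Transfer Acceleration for the bucket.
s3.put_bucket_accelerate_configuration(
    Bucket="my-global-uploads",                      # hypothetical bucket
    AccelerateConfiguration={"Status": "Enabled"},
)

# Then use a client that talks to the accelerated endpoint for the actual transfer.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("big-archive.tar.gz", "my-global-uploads", "backups/big-archive.tar.gz")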

22. How can you speed up data transfer in Snowball?

The data transfer can be sped up in the following ways:

  • By performing multiple copy operations at one time, i.e. if the workstation is powerful enough, you can initiate multiple cp commands, each from a different terminal, onto the same Snowball device.
  • By copying from multiple workstations to the same Snowball.
  • By transferring large files, or by batching small files together, which reduces the encryption overhead.
  • By eliminating unnecessary hops, i.e. setting things up so that the source machine(s) and the Snowball are the only machines active on the switch being used; this can hugely improve performance.
Section 4: AWS VPC

23. If you want to launch Amazon Elastic Compute Cloud (EC2) instances and assign each instance a predetermined private IP address you should:

  1. Launch the instance from a private Amazon Machine Image (AMI).
  2. Assign a group of sequential Elastic IP address to the instances.
  3. Launch the instances in the Amazon Virtual Private Cloud (VPC).
  4. Launch the instances in a Placement Group.

Answer C.

Explanation: In EC2-Classic you cannot choose the private IP address of an instance. When you launch instances into an Amazon VPC, each instance receives a private IP address from the subnet’s address range, and you can specify exactly which private IP address to assign at launch time. Hence launching the instances in a VPC is how you give each instance a predetermined private IP address.
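
A hedged sketch of launching into a VPC subnet with a predetermined private IP (the AMI, subnet and address are placeholders, and the address must be free and inside the subnet’s CIDR):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",     # hypothetical AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0", # hypothetical subnet in your VPC
    PrivateIpAddress="10.0.0.25",        # the predetermined private IP
)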

24. Can I connect my corporate datacenter to the Amazon Cloud?

Yes, you can do this by establishing a VPN (Virtual Private Network) connection between your company’s network and your VPC (Virtual Private Cloud); this will allow you to interact with your EC2 instances as if they were within your existing network.

25. Is it possible to change the private IP addresses of an EC2 while it is running/stopped in a VPC?

The primary private IP address is attached to the instance throughout its lifetime and cannot be changed; however, secondary private IP addresses can be unassigned, assigned, or moved between interfaces or instances at any point.
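
For example, with boto3 (the network interface ID and addresses are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

eni = "eni-0123456789abcdef0"   # hypothetical network interface

# Add a secondary private IP to the interface (let AWS pick a free one)...
ec2.assign_private_ip_addresses(
    NetworkInterfaceId=eni,
    SecondaryPrivateIpAddressCount=1,
)

# ...and later release a specific secondary address (or move it elsewhere).
ec2.unassign_private_ip_addresses(
    NetworkInterfaceId=eni,
    PrivateIpAddresses=["10.0.0.55"],   # hypothetical secondary address
)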

26. Why do you make subnets?

  1. Because there is a shortage of networks
  2. To efficiently utilize networks that have a large no. of hosts.
  3. Because there is a shortage of hosts.
  4. To efficiently utilize networks that have a small no. of hosts.

Answer B.

Explanation: If there is a network which has a large no. of hosts, managing all these hosts can be a tedious job. Therefore we divide this network into subnets (sub-networks) so that managing these hosts becomes simpler.

27. Which of the following is true?

  1. You can attach multiple route tables to a subnet
  2. You can attach multiple subnets to a route table
  3. Both A and B
  4. None of these.

Answer B.

Explanation: Route tables are used to route network packets; if a subnet were associated with multiple route tables, it would be ambiguous which table decides where a packet goes. Therefore a subnet is associated with exactly one route table. A route table, on the other hand, can hold any number of routes, so attaching multiple subnets to the same route table is possible.

28. In CloudFront what happens when content is NOT present at an Edge location and a request is made to it?

  1. An Error “404 not found” is returned
  2. CloudFront delivers the content directly from the origin server and stores it in the cache of the edge location
  3. The request is kept on hold till content is delivered to the edge location
  4. The request is routed to the next closest edge location

Answer B. 

Explanation: CloudFront is a content delivery network that caches data at the edge location nearest to the user to reduce latency. If the content is not present at an edge location, the first request is fetched from the origin server and stored in the edge cache, so subsequent requests are served from the edge.

29. If I’m using Amazon CloudFront, can I use Direct Connect to transfer objects from my own data center?

Yes. Amazon CloudFront supports custom origins including origins from outside of AWS. With AWS Direct Connect, you will be charged with the respective data transfer rates.

30. If my AWS Direct Connect fails, will I lose my connectivity?

If a backup AWS Direct Connect has been configured, it will switch over to the second connection in the event of a failure. It is recommended to enable Bidirectional Forwarding Detection (BFD) when configuring your connections to ensure faster detection and failover. If you have instead configured a backup IPsec VPN connection, all VPC traffic will fail over to the backup VPN connection automatically. Traffic to/from public resources such as Amazon S3 will be routed over the Internet. If you do not have a backup AWS Direct Connect link or an IPsec VPN link, then Amazon VPC traffic will be dropped in the event of a failure.

Section 5: Amazon Database

31. If I launch a standby RDS instance, will it be in the same Availability Zone as my primary?

  1. Only for Oracle RDS types
  2. Yes
  3. Only if it is configured at launch
  4. No

Answer D.

Explanation: No. The purpose of a standby instance is to survive an infrastructure failure (if one happens), so the standby instance is kept in a different Availability Zone, which is a physically separate, independent infrastructure.

32. When would I prefer Provisioned IOPS over Standard RDS storage?

  1. If you have batch-oriented workloads
  2. If you use production online transaction processing (OLTP) workloads.
  3. If you have workloads that are not sensitive to consistent performance
  4. All of the above

Answer B.

Explanation: Provisioned IOPS deliver fast, consistent I/O, which is exactly what latency-sensitive production OLTP (online transaction processing) workloads need, so that is where the extra cost is justified. Batch-oriented workloads and workloads that are not sensitive to consistent performance can generally run on standard storage.
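
If you do need Provisioned IOPS, it is just a couple of extra parameters when creating the RDS instance (the identifier and credentials are placeholders):

import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="orders-oltp-db",   # hypothetical identifier
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    MasterUsername="admin",
    MasterUserPassword="change-me-please",   # placeholder; use Secrets Manager in practice
    AllocatedStorage=100,     # GiB
    StorageType="io1",        # Provisioned IOPS SSD
    Iops=1000,                # provisioned I/O operations per second
    MultiAZ=True,
)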

33. How is Amazon RDS, DynamoDB and Redshift different?

  • Amazon RDS is a database management service for relational databases; it manages patching, upgrading, backing up of data, etc. for you without your intervention. RDS is a database management service for structured data only.
  • DynamoDB, on the other hand, is a NoSQL database service; NoSQL deals with unstructured and semi-structured data.
  • Redshift is an entirely different service: it is a data warehouse product and is used in data analysis.

34. If I am running my DB Instance as a Multi-AZ deployment, can I use the standby DB Instance for read or write operations along with primary DB instance?

  1. Yes
  2. Only with MySQL based RDS
  3. Only for Oracle RDS instances
  4. No

Answer D.

Explanation: No, the standby DB instance cannot be used alongside the primary DB instance; it exists solely for standby purposes and cannot be used unless the primary instance goes down.

35. Your company’s branch offices are all over the world, they use a software with a multi-regional deployment on AWS, they use MySQL 5.6 for data persistence.

The task is to run an hourly batch process and read data from every region to compute cross-regional reports which will be distributed to all the branches. This should be done in the shortest time possible. How will you build the DB architecture in order to meet the requirements?

  1. For each regional deployment, use RDS MySQL with a master in the region and a read replica in the HQ region
  2. For each regional deployment, use MySQL on EC2 with a master in the region and send hourly EBS snapshots to the HQ region
  3. For each regional deployment, use RDS MySQL with a master in the region and send hourly RDS snapshots to the HQ region
  4. For each regional deployment, use MySQL on EC2 with a master in the region and use S3 to copy data files hourly to the HQ region

Answer A.

Explanation: For this we take an RDS instance as the master, because it manages the database for us, and since we have to read from every region, we put a read replica of this instance in every region where the data has to be read from. Option C is not correct: a read replica is more efficient than shipping snapshots, and a read replica can be promoted to an independent DB instance if needed, whereas with a DB snapshot it becomes mandatory to launch a separate DB instance.
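
Creating a cross-region read replica of an RDS MySQL master is a single call made in the destination region, referencing the source by ARN (the identifiers below are placeholders):

import boto3

# Client in the HQ / reader region
rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="reports-replica-us",   # hypothetical replica name
    SourceDBInstanceIdentifier=(
        "arn:aws:rds:ap-south-1:123456789012:db:regional-master"   # hypothetical source ARN
    ),
    DBInstanceClass="db.m5.large",
)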

36. Can I run more than one DB instance for Amazon RDS for free?

Yes. You can run more than one Single-AZ Micro database instance, that too for free! However, any use exceeding 750 instance hours, across all Amazon RDS Single-AZ Micro DB instances, across all eligible database engines and regions, will be billed at standard Amazon RDS prices. For example: if you run two Single-AZ Micro DB instances for 400 hours each in a single month, you will accumulate 800 instance hours of usage, of which 750 hours will be free. You will be billed for the remaining 50 hours at the standard Amazon RDS price.

37. Which AWS services will you use to collect and process e-commerce data for near real-time analysis?

  1. Amazon ElastiCache
  2. Amazon DynamoDB
  3. Amazon Redshift
  4. Amazon Elastic MapReduce

Answer B,C.

Explanation: DynamoDB is a fully managed NoSQL database service, so it can ingest any type of unstructured data, including data from e-commerce websites; the analysis can then be done on that data using Amazon Redshift. We are not using Elastic MapReduce because near real-time analysis is needed.

38. Can I retrieve only a specific element of the data, if I have a nested JSON data in DynamoDB?

Yes. When using the GetItem, BatchGetItem, Query or Scan APIs, you can define a Projection Expression to determine which attributes should be retrieved from the table. Those attributes can include scalars, sets, or elements of a JSON document.
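
For example, pulling just one nested element of a stored JSON document with boto3 (the table, key and attribute names are placeholders):

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

resp = dynamodb.get_item(
    TableName="customers",                       # hypothetical table
    Key={"customer_id": {"S": "cust-42"}},       # hypothetical key
    # Document path into the nested JSON: only this element is returned.
    ProjectionExpression="profile.address.city",
)
print(resp.get("Item"))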

39. A company is deploying a new two-tier web application in AWS. The company has limited staff and requires high availability, and the application requires complex queries and table joins. Which configuration provides the solution for the company’s requirements?

  1. MySQL Installed on two Amazon EC2 Instances in a single Availability Zone
  2. Amazon RDS for MySQL with Multi-AZ
  3. Amazon ElastiCache
  4. Amazon DynamoDB

Answer B.

Explanation: The application requires complex queries and table joins, which are relational capabilities that DynamoDB and ElastiCache do not provide. Amazon RDS for MySQL with Multi-AZ gives the required relational features together with managed high availability and automated administration, which also suits a company with limited staff.

40. What happens to my backups and DB Snapshots if I delete my DB Instance?

When you delete a DB instance, you have the option of creating a final DB snapshot; if you do that, you can restore your database from that snapshot. RDS retains this user-created DB snapshot along with all other manually created DB snapshots after the instance is deleted. Automated backups, however, are deleted; only manually created DB snapshots are retained.

41. Which of the following use cases are suitable for Amazon DynamoDB? Choose 2 answers

  1. Managing web sessions.
  2. Storing JSON documents.
  3. Storing metadata for Amazon S3 objects.
  4. Running relational joins and complex updates.

Answer C,D.

Explanation: If all your JSON data has the same fields, e.g. [id, name, age], then it would be better to store it in a relational database; the metadata, on the other hand, is unstructured. Also, running relational joins or complex updates would work on DynamoDB as well.

42. How can I load my data to Amazon Redshift from different data sources like Amazon RDS, Amazon DynamoDB and Amazon EC2?

You can load the data in the following two ways:

  • You can use the COPY command to load data in parallel directly to Amazon Redshift from Amazon EMR, Amazon DynamoDB, or any SSH-enabled host.
  • AWS Data Pipeline provides a high performance, reliable, fault tolerant solution to load data from a variety of AWS data sources. You can use AWS Data Pipeline to specify the data source, desired data transformations, and then execute a pre-written import script to load your data into Amazon Redshift.
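
As a quick illustration of the first option above, the COPY can be issued through the Redshift Data API so everything stays in one Python sketch (the cluster, database, role ARN and table names are placeholders, and the IAM role must already allow Redshift to read the DynamoDB table):

import boto3

redshift_data = boto3.client("redshift-data", region_name="us-east-1")

copy_sql = """
COPY events
FROM 'dynamodb://events-table'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
READRATIO 50;
"""

redshift_data.execute_statement(
    ClusterIdentifier="analytics-cluster",   # hypothetical cluster
    Database="dev",
    DbUser="awsuser",                        # hypothetical DB user
    Sql=copy_sql,
)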

43. Your application has to retrieve data from your user’s mobile every 5 minutes and the data is stored in DynamoDB, later every day at a particular time the data is extracted into S3 on a per user basis and then your application is later used to visualize the data to the user. You are asked to optimize the architecture of the backend system to lower cost, what would you recommend?

  1. Create a new Amazon DynamoDB table each day and drop the one for the previous day after its data is on Amazon S3.
  2. Introduce an Amazon SQS queue to buffer writes to the Amazon DynamoDB table and reduce provisioned write throughput.
  3. Introduce Amazon Elasticache to cache reads from the Amazon DynamoDB table and reduce provisioned read throughput.
  4. Write data directly into an Amazon Redshift cluster replacing both Amazon DynamoDB and Amazon S3.

Answer C.

Explanation: Since the work requires the data to be read back for extraction and analysis, one way to keep it fast is to raise the provisioned read throughput, but that is expensive. Using Amazon ElastiCache to cache the reads in memory instead lets you reduce the provisioned read throughput on the DynamoDB table, and hence reduce cost, without affecting performance.

44. You are running a website on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL Extra Large DB Instance. The site performs a high number of small reads and writes per second and relies on an eventual consistency model. After comprehensive tests you discover that there is read contention on RDS MySQL. Which are the best approaches to meet these requirements? (Choose 2 answers)

  1. Deploy ElastiCache in-memory cache running in each availability zone
  2. Implement sharding to distribute load to multiple RDS MySQL instances
  3. Increase the RDS MySQL Instance size and Implement provisioned IOPS
  4. Add an RDS MySQL read replica in each availability zone

Answer A,C.

Explanation: Since the site does a lot of small reads and writes, relying on provisioned IOPS alone can get expensive, but high performance is still needed; frequently read data can therefore be cached using ElastiCache. As for RDS, since read contention is happening, the instance size should be increased and provisioned IOPS introduced to improve performance.

45. A startup is running a pilot deployment of around 100 sensors to measure street noise and air quality in urban areas for 3 months. It was noted that every month around 4GB of sensor data is generated. The company uses a load balanced auto scaled layer of EC2 instances and a RDS database with 500 GB standard storage. The pilot was a success and now they want to deploy at least 100K sensors which need to be supported by the backend. You need to store the data for at least 2 years to analyze it. Which setup of the following would you prefer?

  1. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance
  2. Ingest data into a DynamoDB table and move old data to a Redshift cluster
  3. Replace the RDS instance with a 6 node Redshift cluster with 96TB of storage
  4. Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS

Answer C.

Explanation: A Redshift cluster is preferred because it is easy to scale and the work is done in parallel across the nodes, which fits a much bigger workload like this one. As for sizing: the 100-sensor pilot generates about 4 GB per month, so two years of pilot data would be roughly 96 GB; scaling from 100 sensors to 100K sensors is a 1,000x increase, which brings the two-year volume to roughly 96 TB. Hence option C is the right answer.

Section 6: AWS Auto Scaling, AWS Load Balancer

46. Suppose you have an application where you have to render images and also do some general computing. From the following services which service will best fit your need?

  1. Classic Load Balancer
  2. Application Load Balancer
  3. Both of them
  4. None of these

Answer B.

Explanation: You would choose an Application Load Balancer, since it supports path-based routing: it can make routing decisions based on the URL path, so image-rendering requests can be routed to one set of instances and general-compute requests to a different set.
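
A sketch of such a path-based rule with boto3 (the listener and target group ARNs are placeholders):

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Send anything under /render/* to the image-rendering target group;
# the listener's default action keeps handling general compute traffic.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/demo/abc/def",  # placeholder
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/render/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/render/123",  # placeholder
    }],
)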

47. What is the difference between Scalability and Elasticity?

Scalability is the ability of a system to increase its hardware resources to handle the increase in demand. It can be done by increasing the hardware specifications or increasing the processing nodes.

Elasticity is the ability of a system to handle an increase in workload by adding hardware resources when demand increases (same as scaling), but also to roll back the scaled resources when they are no longer needed. This is particularly helpful in cloud environments, where a pay-per-use model is followed.

48. How will you change the instance type for instances which are running in your application tier and are using Auto Scaling. Where will you change it from the following areas?

  1.  Auto Scaling policy configuration
  2.  Auto Scaling group
  3.  Auto Scaling tags configuration
  4.   Auto Scaling launch configuration

Answer D.

Explanation: The Auto Scaling tags configuration is used to attach metadata to your instances; to change the instance type, you have to modify the Auto Scaling launch configuration.

49. You have a content management system running on an Amazon EC2 instance that is approaching 100% CPU utilization. Which option will reduce load on the Amazon EC2 instance?

  1.   Create a load balancer, and register the Amazon EC2 instance with it
  2.  Create a CloudFront distribution, and configure the Amazon EC2 instance as the origin
  3.   Create an Auto Scaling group from the instance using the CreateAutoScalingGroup action
  4.   Create a launch configuration from the instance using the CreateLaunchConfigurationAction

Answer A.

Explanation: Creating an Auto Scaling group alone will not solve the issue until you attach a load balancer to it; once a load balancer is attached (and the instance registered with it), the load can be distributed efficiently across all the instances. Option B – CloudFront is a CDN, a data transfer tool, therefore it will not help reduce load on the EC2 instance. Similarly, a launch configuration is only a template for instance configuration and has no connection with reducing load.
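
Registering the instance behind a load balancer is a single boto3 call (the target group ARN and instance ID are placeholders; the target group and Application Load Balancer are assumed to already exist):

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

elbv2.register_targets(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/cms/123",  # placeholder
    Targets=[{"Id": "i-0123456789abcdef0"}],   # hypothetical CMS instance
)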

50. When should I use a Classic Load Balancer and when should I use an Application load balancer?

A Classic Load Balancer is ideal for simple load balancing of traffic across multiple EC2 instances, while an Application Load Balancer is ideal for microservices or container-based architectures where there is a need to route traffic to multiple services or load balance across multiple ports on the same EC2 instance.

51. What does Connection draining do?

  1.  Terminates instances which are not in use.
  2.  Re-routes traffic from instances which are to be updated or failed a health check.
  3.  Re-routes traffic from instances which have more workload to instances which have less workload.
  4.  Drains all the connections from an instance, with one click.

Answer B.

Explanation: Connection draining is an ELB feature. When an instance is to be taken out of service – because it failed a health check or needs to be patched with a software update – the load balancer stops sending new requests to it, lets the in-flight requests complete, and routes new traffic to the other healthy instances.
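
On a Classic Load Balancer, connection draining is just an attribute you switch on (the load balancer name is a placeholder):

import boto3

elb = boto3.client("elb", region_name="us-east-1")   # Classic ELB API

elb.modify_load_balancer_attributes(
    LoadBalancerName="my-classic-elb",               # hypothetical name
    LoadBalancerAttributes={
        # Give in-flight requests up to 300 seconds to finish before deregistration.
        "ConnectionDraining": {"Enabled": True, "Timeout": 300},
    },
)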

52. When an instance is unhealthy, it is terminated and replaced with a new one, which of the following services does that?

  1.  Sticky Sessions
  2.  Fault Tolerance
  3.  Connection Draining
  4.  Monitoring

Answer B.

Explanation: When ELB detects that an instance is unhealthy, it starts routing incoming traffic to the other healthy instances it manages. If all of those instances become unhealthy and you have instances in another Availability Zone, your traffic is directed to them; once the original instances become healthy again, traffic is routed back to them.

53. What are lifecycle hooks used for in AutoScaling?

  1.   They are used to do health checks on instances
  2.  They are used to put an additional wait time to a scale in or scale out event.
  3.  They are used to shorten the wait time to a scale in or scale out event
  4.  None of these

Answer B.

Explanation: Lifecycle hooks are used to add a wait time before a lifecycle action (launching or terminating an instance) completes. The purpose of this wait time can be anything from pulling log files off an instance before it is terminated to installing the necessary software on an instance before it goes into service.
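
For example, pausing terminating instances so logs can be copied off them first (the group and hook names are placeholders):

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_lifecycle_hook(
    AutoScalingGroupName="web-asg",                       # hypothetical group
    LifecycleHookName="drain-logs-before-terminate",      # hypothetical hook name
    LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
    HeartbeatTimeout=300,        # wait up to 5 minutes (or until you complete the hook)
    DefaultResult="CONTINUE",    # proceed with termination if the timeout expires
)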

54. A user has setup an Auto Scaling group. Due to some issue the group has failed to launch a single instance for more than 24 hours. What will happen to Auto Scaling in this condition?

  1. Auto Scaling will keep trying to launch the instance for 72 hours
  2. Auto Scaling will suspend the scaling process
  3. Auto Scaling will start an instance in a separate region
  4. The Auto Scaling group will be terminated automatically

Answer B.

Explanation: Auto Scaling allows you to suspend and then resume one or more of the Auto Scaling processes in your Auto Scaling group. This can be very useful when you want to investigate a configuration problem or other issue with your web application, and then make changes to your application, without triggering the Auto Scaling process.
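
Suspending and later resuming the scaling processes yourself looks like this with boto3 (the group name is a placeholder):

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Pause new launches while you investigate the failing configuration...
autoscaling.suspend_processes(
    AutoScalingGroupName="web-asg",          # hypothetical group
    ScalingProcesses=["Launch"],
)

# ...and resume them once the problem is fixed.
autoscaling.resume_processes(
    AutoScalingGroupName="web-asg",
    ScalingProcesses=["Launch"],
)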

Section 7: CloudTrail, Route 53

55. You have an EC2 Security Group with several running EC2 instances. You changed the Security Group rules to allow inbound traffic on a new port and protocol, and then launched several new instances in the same Security Group. The new rules apply:

  1. Immediately to all instances in the security group.
  2. Immediately to the new instances only.
  3. Immediately to the new instances, but old instances must be stopped and restarted before the new rules apply.
  4. To all instances, but it may take several minutes for old instances to see the changes.

Answer A.

Explanation: Any rule added to an EC2 security group applies immediately to all the instances in that security group, irrespective of whether they were launched before or after the rule was added.

56. To create a mirror image of your environment in another region for disaster recovery, which of the following AWS resources do not need to be recreated in the second region? ( Choose 2 answers )

  1. Route 53 Record Sets
  2. Elastic IP Addresses (EIP)
  3. EC2 Key Pairs
  4. Launch configurations
  5. Security Groups

Answer A.

Explanation: Route 53 record sets are common assets and therefore do not need to be replicated, since Route 53 is a global service that is valid across regions.

57. A customer wants to capture all client connection information from his load balancer at an interval of 5 minutes, which of the following options should he choose for his application?

  1. Enable AWS CloudTrail for the loadbalancer.
  2. Enable access logs on the load balancer.
  3. Install the Amazon CloudWatch Logs agent on the load balancer.
  4. Enable Amazon CloudWatch metrics on the load balancer.

Answer B.

Explanation: Elastic Load Balancing access logs capture detailed information about the requests and client connections sent to the load balancer, and they can be published to Amazon S3 at 5-minute intervals, which matches the requirement exactly. AWS CloudTrail records API calls made against the load balancer (for example, configuration changes), not client connection information.

58. A customer wants to track access to their Amazon Simple Storage Service (S3) buckets and also use this information for their internal security and access audits. Which of the following will meet the Customer requirement?

  1. Enable AWS CloudTrail to audit all Amazon S3 bucket access.
  2. Enable server access logging for all required Amazon S3 buckets.
  3. Enable the Requester Pays option to track access via AWS Billing
  4. Enable Amazon S3 event notifications for Put and Post.

Answer A.

Explanation: AWS CloudTrail has been designed for logging and tracking API calls, and it is available for Amazon S3 as well, therefore it can be used in this use case.

59. Which of the following are true regarding AWS CloudTrail? (Choose 2 answers)

  1. CloudTrail is enabled globally
  2. CloudTrail is enabled on a per-region and service basis
  3. Logs can be delivered to a single Amazon S3 bucket for aggregation.
  4. CloudTrail is enabled for all available services within a region.

Answer B,C.

Explanation: CloudTrail is not enabled for all the services, nor is it available in all regions; therefore option B is correct. Also, the logs can be delivered to a single S3 bucket for aggregation, hence C is correct as well.
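
Setting up a trail that aggregates every region’s events into one bucket takes two calls (the trail and bucket names are placeholders; the bucket must already have a policy that lets CloudTrail write to it):

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

cloudtrail.create_trail(
    Name="org-audit-trail",               # hypothetical trail name
    S3BucketName="central-audit-logs",    # hypothetical, pre-configured bucket
    IsMultiRegionTrail=True,              # aggregate events from all regions
)
cloudtrail.start_logging(Name="org-audit-trail")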

60. What happens if CloudTrail is turned on for my account but my Amazon S3 bucket is not configured with the correct policy?

CloudTrail files are delivered according to S3 bucket policies. If the bucket is not configured or is misconfigured, CloudTrail might not be able to deliver the log files.

61. How do I transfer my existing domain name registration to Amazon Route 53 without disrupting my existing web traffic?

You will first need to get a list of the DNS record data for your domain name; it is generally available in the form of a “zone file” that you can get from your existing DNS provider. Once you have the DNS record data, you can use Route 53’s management console or its simple web-services interface to create a hosted zone that will store the DNS records for your domain name, and then follow its transfer process. This includes steps such as updating the name servers for your domain name to the ones associated with your hosted zone. To complete the process, you contact the registrar with whom you registered your domain name and follow their transfer process. As soon as the registrar propagates the new name server delegations, your DNS queries will start to be answered by Route 53.
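
Recreating the zone and its records ahead of the name server switch might look like this with boto3 (the domain, record values and caller reference are placeholders):

import time
import boto3

route53 = boto3.client("route53")

# 1. Create the hosted zone that will serve the domain after the transfer.
zone = route53.create_hosted_zone(
    Name="example.com",                      # hypothetical domain
    CallerReference=str(time.time()),        # must be unique per request
)
zone_id = zone["HostedZone"]["Id"]

# 2. Recreate each record from the old provider's zone file.
route53.change_resource_record_sets(
    HostedZoneId=zone_id,
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "TTL": 300,
            "ResourceRecords": [{"Value": "203.0.113.10"}],   # placeholder IP
        },
    }]},
)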

Section 8: AWS SQS, AWS SNS, AWS SES, AWS ElasticBeanstalk

62. Which of the following services you would not use to deploy an app?

  1. Elastic Beanstalk
  2. Lambda
  3. Opsworks
  4. CloudFormation

Answer B.

Explanation: Lambda is used for running serverless functions. It deploys individual functions that are triggered by events, without you worrying about the computing resources running in the background; it is not designed for deploying a complete, publicly accessed application in the way the other three services are.

63. How does Elastic Beanstalk apply updates?

  1. By having a duplicate ready with updates before swapping.
  2. By updating on the instance while it is running
  3. By taking the instance down in the maintenance window
  4. Updates should be installed manually

Answer A.

Explanation: Elastic Beanstalk prepares a duplicate copy of the instance before updating the original instance, and routes your traffic to the duplicate instance, so that in case your updated application fails, it can switch back to the original instance and there will be no downtime experienced by the users of your application.

64. How is AWS Elastic Beanstalk different than AWS OpsWorks?

AWS Elastic Beanstalk is an application management platform while OpsWorks is a configuration management platform. Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications developed with Java, .NET, PHP, Node.js, Python, Ruby, Go and Docker. Customers upload their code and Elastic Beanstalk automatically handles the deployment; the application is ready to use without any infrastructure or resource configuration.

In contrast, AWS Opsworks is an integrated configuration management platform for IT administrators or DevOps engineers who want a high degree of customization and control over operations.

65. What happens if my application stops responding to requests in beanstalk?

AWS Elastic Beanstalk applications have a system in place for avoiding failures in the underlying infrastructure. If an Amazon EC2 instance fails for any reason, Beanstalk will use Auto Scaling to automatically launch a new instance. Beanstalk can also detect if your application is not responding on the custom health-check URL, even though the infrastructure appears healthy; this is logged as an environment event (e.g. a bad version was deployed), so that you can take the appropriate action.

For a detailed discussion on this topic, please refer to the AWS Lambda blog.

Section 9: AWS OpsWorks, AWS KMS

66. How is AWS OpsWorks different than AWS CloudFormation?

OpsWorks and CloudFormation both support application modelling, deployment, configuration, management and related activities. Both support a wide variety of architectural patterns, from simple web applications to highly complex applications. AWS OpsWorks and AWS CloudFormation differ in abstraction level and areas of focus.

AWS CloudFormation is a building-block service which enables customers to manage almost any AWS resource via a JSON-based domain-specific language. It provides foundational capabilities for the full breadth of AWS, without prescribing a particular model for development and operations. Customers define templates and use them to provision and manage AWS resources, operating systems and application code.

In contrast, AWS OpsWorks is a higher level service that focuses on providing highly productive and reliable DevOps experiences for IT administrators and ops-minded developers. To do this, AWS OpsWorks employs a configuration management model based on concepts such as stacks and layers, and provides integrated experiences for key activities like deployment, monitoring, auto-scaling, and automation. Compared to AWS CloudFormation, AWS OpsWorks supports a narrower range of application-oriented AWS resource types including Amazon EC2 instances, Amazon EBS volumes, Elastic IPs, and Amazon CloudWatch metrics.

67. I created a key in Oregon region to encrypt my data in North Virginia region for security purposes. I added two users to the key and an external AWS account. I wanted to encrypt an object in S3, so when I tried, the key that I just created was not listed. What could be the reason? 

  1. External aws accounts are not supported.
  2. AWS S3 cannot be integrated KMS.
  3. The Key should be in the same region.
  4. New keys take some time to reflect in the list.

Answer C.

Explanation: The key created and the data to be encrypted should be in the same region. Hence the approach taken here to secure the data is incorrect.
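
For SSE-KMS, the key and the bucket must be in the same region; a hedged sketch (the bucket name is a placeholder, and the bucket is assumed to live in us-east-1 / North Virginia):

import boto3

region = "us-east-1"   # same region as the data / bucket

kms = boto3.client("kms", region_name=region)
key = kms.create_key(Description="key for S3 objects in us-east-1")
key_id = key["KeyMetadata"]["KeyId"]

s3 = boto3.client("s3", region_name=region)
s3.put_object(
    Bucket="my-nv-bucket",                 # hypothetical bucket in us-east-1
    Key="reports/q1.csv",
    Body=b"some,data\n",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId=key_id,
)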

68. A company needs to monitor the read and write IOPS for their AWS MySQL RDS instance and send real-time alerts to their operations team. Which AWS services can accomplish this?

  1. Amazon Simple Email Service
  2. Amazon CloudWatch
  3. Amazon Simple Queue Service
  4. Amazon Route 53

Answer B.

Explanation: Amazon CloudWatch is the monitoring service, so it is the right choice for this use case; the other options serve different purposes, for example Route 53 provides DNS services. CloudWatch can watch the RDS ReadIOPS/WriteIOPS metrics and trigger an alarm (typically via an SNS topic) to notify the operations team.
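
A sketch of such an alarm (the DB instance identifier, threshold and SNS topic ARN are placeholders; the SNS topic with the team’s subscriptions is assumed to already exist):

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="rds-read-iops-high",
    Namespace="AWS/RDS",
    MetricName="ReadIOPS",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "orders-oltp-db"}],  # placeholder
    Statistic="Average",
    Period=60,
    EvaluationPeriods=1,
    Threshold=5000.0,                       # placeholder threshold
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],            # placeholder topic
)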

69. What happens when one of the resources in a stack cannot be created successfully in AWS OpsWorks?

When an event like this occurs, the “automatic rollback on error” feature kicks in and deletes all the AWS resources that were created successfully up to the point where the error occurred. This is helpful because it does not leave behind any half-created resources: stacks are either created fully or not created at all. It is useful in cases where you accidentally exceed your limit on the number of Elastic IP addresses, or do not have access to an EC2 AMI that you are trying to run, etc.

70. What automation tools can you use to spinup servers?

Any of the following tools can be used:

  • Roll your own scripts and use the AWS API tools. Such scripts could be written in Bash, Perl or another language of your choice.
  • Use a configuration management and provisioning tool like Puppet or Chef. You can also use a tool like Scalr.
  • Use a managed solution such as RightScale.

Overwhelmed with all these questions?

I hope you enjoyed these AWS interview questions. The topics you learnt in this AWS Architect interview questions blog are the most sought-after skill sets that recruiters look for in an AWS Solutions Architect. I have tried to touch upon AWS interview questions and answers for freshers, and you will also find questions aimed at people with 3–5 years of experience. For a more detailed study of AWS, you can refer to our AWS Tutorial.

Got a question for us? Please mention it in the comments section of this AWS Architect Interview Questions and we will get back to you.

Thanks for reading

If you liked this post, share it with all of your programming buddies!

Follow us on Facebook | Twitter

Further reading

AWS Certified Solution Architect Associate

AWS Lambda vs. Azure Functions vs. Google Functions

Running TensorFlow on AWS Lambda using Serverless

Deploy Docker Containers With AWS CodePipeline

A Complete Guide on Deploying a Node app to AWS with Docker

Create and Deploy AWS and AWS Lambda using Serverless Framework

Introduction To AWS Lambda