From YAML to TypeScript: a developer’s view on cloud automation

The rise of managed cloud services, cloud-native, and serverless applications brings both new possibilities and challenges. More and more practices from software development processes like version control, code review, continuous integration, and automated testing are applied to cloud infrastructure automation.

Most existing tools suggest defining infrastructure in text-based markup formats, YAML being the favorite. In this article, I’m making a case for using real programming languages like TypeScript instead. Such a change makes even more software development practices applicable to the infrastructure realm.

Sample Application

It’s easier to make a case given a specific example. For this article, we’ll build a URL Shortener application, a basic clone of tinyurl.com or bit.ly. There is an administrative page where we can define short aliases for long URLs.

Now, whenever a visitor goes to the base URL of the application plus an existing alias, they get redirected to the full URL.

This app is simple to describe but involves enough moving parts to be representative of some real-world issues. As a bonus, there are many existing implementations on the web to compare with.

Serverless URL Shortener

I’m a big proponent of serverless architecture: the style of building cloud applications as a combination of serverless functions and managed cloud services. They are fast to develop, effortless to run, and cost pennies unless the application gets lots of users. However, even serverless applications have to deal with infrastructure, like databases, queues, and other sources of events and destinations of data.

My examples are going to use Amazon’s AWS, but this could be Microsoft Azure or Google Cloud Platform too.

So, the gist is to store URLs with short names as key-value pairs in Amazon DynamoDB and use AWS Lambdas to run the application code. The initial sketch is two Lambda functions around that table.

The first Lambda receives an event when somebody adds a new URL. It extracts the name and the URL from the request and saves them as an item in the DynamoDB table.

The second Lambda is called whenever a user navigates to a short URL. The code reads the full URL based on the requested path and returns a 301 response with the corresponding location.
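For illustration, an item in the urls table might look like this (the name and URL values are hypothetical):

{
    "name": "cats",
    "url": "https://example.com/a/very/long/url/about/cats"
}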

Here is the implementation of the Open URL Lambda in JavaScript:

const aws = require('aws-sdk');
const table = new aws.DynamoDB.DocumentClient(); 
exports.handler = async (event) => { 
  const name = event.path.substring(1); 
  const params = { TableName: "urls", Key: { "name": name } }; 
  const value = await table.get(params).promise(); 
  const url = value && value.Item && value.Item.url; 
  return url 
    ? { statusCode: 301, body: "", headers: { "Location": url } } 
    : { statusCode: 404, body: name + " not found" };
};
That’s 11 lines of code. I’ll skip the implementation of the Add URL function because it’s very similar (a rough sketch follows below). Considering a third function to list the existing URLs for the UI, we might end up with 30-40 lines of JavaScript in total.
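For completeness, here is a minimal sketch of what the Add URL Lambda could look like; the event shape (a JSON body with name and url fields) is my assumption for illustration, not code from the original application:

const aws = require('aws-sdk');
const table = new aws.DynamoDB.DocumentClient();
exports.handler = async (event) => {
  // Assumed request shape: a JSON body such as { "name": "...", "url": "..." }.
  const { name, url } = JSON.parse(event.body);
  // Store the pair as an item in the same "urls" table.
  await table.put({ TableName: "urls", Item: { name, url } }).promise();
  return { statusCode: 200, body: JSON.stringify({ name }) };
};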

So, how do we deploy the application?

Well, before we do that, we should realize that the design above was an over-simplification:

  • AWS Lambda can’t handle HTTP requests directly, so we need to add AWS API Gateway in front of it.
  • We also need to serve some static files for the UI, which we’ll put into AWS S3 and proxy through the same API Gateway.

With those two additions, the design places API Gateway in front of both the Lambdas and the S3-hosted static files.

This is a viable design, but the details are even more complicated:

  • API Gateway is a complex beast that needs Stages, Deployments, and REST Endpoints to be appropriately configured.
  • Permissions and Policies need to be defined so that API Gateway can call the Lambdas and the Lambdas can access DynamoDB.
  • Static Files should go to S3 Bucket Objects.

So, the actual setup involves a couple of dozen objects to be configured in AWS.

How do we approach this task?

Options to Provision the Infrastructure

There are many options to provision a cloud application, and each one has its trade-offs. Let’s quickly go through the list of possibilities to understand the landscape.

AWS Web Console

AWS, like any other cloud, has a web user interface to configure its resources.

That’s a decent place to start — good for experimenting, figuring out the available options, following the tutorials, i.e., for exploration.

However, it isn’t particularly well suited for long-lived, ever-changing applications developed by teams. A manually clicked deployment is hard to reproduce exactly, which quickly becomes a maintainability issue.

AWS Command Line Interface

The AWS Command Line Interface (CLI) is a unified tool to manage all AWS services from a command prompt. You write the calls like this:

aws apigateway create-rest-api --name 'My First API' --description 'This is my first API' 
aws apigateway create-stage --rest-api-id 1234123412 --stage-name 'dev' --description 'Development stage' --deployment-id a1b2c3

The initial experience might not be as smooth as clicking buttons in the browser, but the huge benefit is that commands, once written, can be reused. You can combine many commands into cohesive scripts, so a colleague can benefit from the same script that you created, and you can provision multiple environments by parameterizing the scripts.

Frankly speaking, I’ve never done that for several reasons:

  • CLI scripts feel too imperative to me. I have to describe “how” to do things, not “what” I want to get in the end.
  • There seems to be no good story for updating existing resources. Do I write small delta scripts for each change? Do I have to keep them forever and run the full suite every time I need a new environment?
  • If a failure occurs mid-way through the script, I need to manually repair everything to a consistent state. This gets messy real quick, and I have no desire to exercise this process, especially in production.

To overcome such limitations, the notion of the Desired State Configuration (DSC) was invented. Under this paradigm, we describe the desired layout of the infrastructure, and then the tooling takes care of either provisioning it from scratch or applying the required changes to an existing environment.

Which tools provide a DSC model for AWS? There are legions.

AWS CloudFormation

AWS CloudFormation is the first-party tool for Desired State Configuration management from Amazon. CloudFormation templates use YAML to describe all the infrastructure resources of AWS.

Here is a snippet from a private URL shortener example kindly provided on the AWS blog:

S3BucketForURLs:
  Type: "AWS::S3::Bucket"
  DeletionPolicy: Delete
  Properties:
    BucketName: !If [ "CreateNewBucket", !Ref "AWS::NoValue", !Ref S3BucketName ]
    WebsiteConfiguration:
      IndexDocument: "index.html"
    LifecycleConfiguration:
      Rules:
        - Id: DisposeShortUrls
          ExpirationInDays: !Ref URLExpiration
          Prefix: "u"
          Status: Enabled

This is just a very short fragment: the complete example consists of 317 lines of YAML. That’s an order of magnitude more than the actual JavaScript code that we have in the application!

CloudFormation is a powerful tool, but it takes quite some learning to master. Moreover, it’s specific to AWS: you won’t be able to transfer the skill to other cloud providers.

Wouldn’t it be great if there was a universal DSC format? Meet Terraform.

Terraform
HashiCorp Terraform is an open source tool to define infrastructure in declarative configuration files. It has a pluggable architecture, so the tool supports all major clouds and even hybrid scenarios.

The custom text-based Terraform .tf format is used to define the configurations. The templating language is quite powerful, and once you learn it, you can use it for different cloud providers.

Here is a snippet from the AWS Lambda Short URL Generator example:

resource "aws_api_gateway_rest_api" "short_urls_api_gateway" {
  name        = "Short URLs API"
  description = "API for managing short URLs."
}

resource "aws_api_gateway_usage_plan" "short_urls_api_usage_plan" {
  name         = "Short URLs admin API key usage plan"
  description  = "Usage plan for the admin API key for Short URLS."
  api_stages {
    api_id = "${aws_api_gateway_rest_api.short_urls_api_gateway.id}"
    stage  = "${aws_api_gateway_deployment.short_url_deployment.stage_name}"
  }
}

This time, the complete example is around 450 lines of textual templates. Are there ways to reduce the size of the infrastructure definition?

Yes, by raising the level of abstraction. It’s possible with Terraform’s modules, or by using other, more specialized tools.

Serverless Framework and SAM

The Serverless Framework is an infrastructure management tool focused on serverless applications. It works across cloud providers (AWS support is the strongest though) and only exposes features related to building applications with cloud functions.

The benefit is that it’s much more concise. Once again, the tool uses YAML to define the templates; here is a snippet from a Serverless URL Shortener example:

functions:
  store:
    handler: api/store.handle
    events:
      - http:
          path: /
          method: post
          cors: true

The domain-specific language yields a shorter definition: this example has 45 lines of YAML + 123 lines of JavaScript functions.

However, the conciseness has a flip side: as soon as you veer outside of the fairly "thin" golden path — the cloud functions and an incomplete list of event sources — you have to fall back to more generic tools like CloudFormation. If your landscape includes lower-level infrastructure work or some container-based components, you’re stuck using multiple configuration languages and tools again.

Amazon’s AWS Serverless Application Model (SAM) looks very similar to the Serverless Framework but is tailored to be AWS-specific.

Is that the end game? I don’t think so.

Desired Properties of Infrastructure Definition Tool

So what have we learned while going through the existing landscape? The perfect infrastructure tools should:

  • Provide reproducible results of deployments
  • Be scriptable, i.e., require no human intervention after the definition is complete
  • Define the desired state rather than exact steps to achieve it
  • Support multiple cloud providers and hybrid scenarios
  • Be universal in the sense of using the same tool to define any type of resource
  • Be succinct and concise to stay readable and manageable
  • ~~Use YAML-based format~~

Nah, I crossed out the last item. YAML seems to be the most popular language among this class of tools (and I haven’t even touched Kubernetes yet!), but I’m not convinced it works well for me. YAML has many flaws, and I just don’t want to use it.

Have you noticed that I haven’t mentioned Infrastructure as code a single time yet? Well, here we go (from Wikipedia):

Infrastructure as code (IaC) is the process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools.

Shouldn’t it be called “Infrastructure as definition files”, or “Infrastructure as YAML”?

As a software developer, what I really want is “Infrastructure as actual code, you know, the program thing”. I want to use the same language that I already know. I want to stay in the same editor. I want to get IntelliSense auto-completion when I type. I want to see the compilation errors when what I typed is not syntactically correct. I want to reuse the developer skills that I already have. I want to come up with abstractions to generalize my code and create reusable components. I want to leverage the open-source community who would create much better components than I ever could. I want to combine the code and infrastructure in one code project.

If you are with me on that, keep reading. You get all of that with Pulumi.

Pulumi
Pulumi is a tool to build cloud-based software using real programming languages. It supports all major cloud providers, plus Kubernetes.

The Pulumi programming model supports Go and Python too, but I’m going to use TypeScript for the rest of the article.

While prototyping the URL shortener, I’ll explain the fundamental way of working and illustrate the benefits and some trade-offs. If you want to follow along, install Pulumi.

How Pulumi Works

Let’s start defining our URL shortener application in TypeScript. I installed @pulumi/pulumi and @pulumi/aws NPM modules so that I can start the program. The first resource to create is a DynamoDB table:

import * as aws from "@pulumi/aws";

// A DynamoDB table with a single primary key
let counterTable = new aws.dynamodb.Table("urls", {
    name: "urls",
    attributes: [
        { name: "name", type: "S" },
    ],
    hashKey: "name",
    readCapacity: 1,
    writeCapacity: 1,
});
I use the pulumi CLI to run this program and provision the actual resource in AWS:

> pulumi up
Previewing update (urlshortener):

     Type                   Name          Plan
 +   pulumi:pulumi:Stack    urlshortener  create
 +   aws:dynamodb:Table     urls          create

    + 2 to create

Do you want to perform this update? yes

Updating (urlshortener):

     Type                   Name          Status
 +   pulumi:pulumi:Stack    urlshortener  created
 +   aws:dynamodb:Table     urls          created

    + 2 created

The CLI first shows the preview of the changes to be made, and when I confirm, it creates the resource. It also creates a *stack* — a container for all the resources of the application.

This code might look like an imperative command to create a DynamoDB table, but it actually isn’t. If I go ahead and change readCapacity to 2 and then re-run pulumi up, it produces a different outcome:

> pulumi up
Previewing update (urlshortener):

     Type                   Name          Plan      Info
     pulumi:pulumi:Stack    urlshortener
 ~   aws:dynamodb:Table     urls          update    [diff: ~readCapacity]

    ~ 1 to update
    1 unchanged

It detects the exact change that I made and suggests an update. Here is how Pulumi works:

The index.ts file is my program. Pulumi’s language host understands TypeScript and translates the code into commands for the internal engine. From those commands, the engine builds a tree of resources-to-be-provisioned: the desired state of the infrastructure.

The end state of the last deployment is persisted in storage (either the pulumi.com backend or a file on disk). The engine then compares the current state of the system with the desired state of the program and calculates the delta in terms of create, update, and delete commands to the cloud provider.

Help Of Types

Now I can proceed to the code that defines a Lambda function:

// Create a Role giving our Lambda access.
let policy: aws.iam.PolicyDocument = { /* Redacted for brevity */ };
let role = new aws.iam.Role("lambda-role", {
    assumeRolePolicy: JSON.stringify(policy),
});
let fullAccess = new aws.iam.RolePolicyAttachment("lambda-access", {
    role: role,
    policyArn: aws.iam.AWSLambdaFullAccess,
});

// Create a Lambda function, using code from the ./app folder.
let lambda = new aws.lambda.Function("lambda-get", {
    runtime: aws.lambda.NodeJS8d10Runtime,
    code: new pulumi.asset.AssetArchive({
        ".": new pulumi.asset.FileArchive("./app"),
    }),
    timeout: 300,
    handler: "read.handler",
    role: role.arn,
    environment: {
        variables: {
            "COUNTER_TABLE": counterTable.name,
        },
    },
}, { dependsOn: [fullAccess] });

You can see that the complexity kicked in and the code size is growing. However, now I start to gain real benefits from using a typed programming language:

  • I’m using objects in the definitions of other objects’ parameters. If I misspell a name, I don’t get a runtime failure but an immediate error message in the editor.
  • If I don’t know which options I need to provide, I can go to the type definition and look it up (or use IntelliSense).
  • If I forget to specify a mandatory option, I get a clear error.
  • If the type of the input parameter doesn’t match the type of the object I’m passing, I get an error again.
  • I can use language features like JSON.stringify right inside my program. In fact, I can reference and use any NPM module (see the sketch after this list).
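As a quick illustration of that last point, here is a hypothetical sketch, not part of the shortener itself, where a plain TypeScript array and a map call generate one table per environment:

import * as aws from "@pulumi/aws";

// The environment names are made up for illustration.
const environments = ["dev", "staging", "prod"];

// Ordinary language constructs generate resources: one table per environment.
const tables = environments.map(env =>
    new aws.dynamodb.Table(`urls-${env}`, {
        attributes: [{ name: "name", type: "S" }],
        hashKey: "name",
        readCapacity: 1,
        writeCapacity: 1,
    }));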

You can see the code for API Gateway here. It looks too verbose, doesn’t it? Moreover, I’m only half-way through with only one Lambda function defined.

Reusable Components

We can do better than that. Here is the improved definition of the same Lambda function:

import { Lambda } from "./lambda";

const func = new Lambda("lambda-get", {
    path: "./app",
    file: "read",
    environment: {
        "COUNTER_TABLE": counterTable.name,
    },
});

Now, isn’t that beautiful? Only the essential options remained, while all the machinery is gone. Well, it’s not completely gone; it’s been hidden behind an abstraction.

I defined a custom component called Lambda:

export interface LambdaOptions {
    readonly path: string;
    readonly file: string;

    readonly environment?: pulumi.Input<{
        [key: string]: pulumi.Input<string>;
    }>;
}

export class Lambda extends pulumi.ComponentResource {
    public readonly lambda: aws.lambda.Function;

    constructor(name: string,
                options: LambdaOptions,
                opts?: pulumi.ResourceOptions) {
        super("my:Lambda", name, opts);

        const role = // ... Role as defined in the last snippet
        const fullAccess = // ... RolePolicyAttachment as defined in the last snippet

        this.lambda = new aws.lambda.Function(`${name}-func`, {
            runtime: aws.lambda.NodeJS8d10Runtime,
            code: new pulumi.asset.AssetArchive({
                ".": new pulumi.asset.FileArchive(options.path),
            }),
            timeout: 300,
            handler: `${options.file}.handler`,
            role: role.arn,
            environment: {
                variables: options.environment,
            },
        }, { dependsOn: [fullAccess], parent: this });
    }
}
The interface LambdaOptions defines options that are important for my abstraction. The class Lambda derives from pulumi.ComponentResource and creates all the child resources in its constructor.

A nice effect is that one can see the structure in pulumi preview:

> pulumi up
Previewing update (urlshortener):

     Type                              Name               Plan
 +   pulumi:pulumi:Stack               urlshortener       create
 +   my:Lambda                         lambda-get         create
 +     aws:iam:Role                    lambda-get-role    create
 +     aws:iam:RolePolicyAttachment    lambda-get-access  create
 +     aws:lambda:Function             lambda-get-func    create
 +   aws:dynamodb:Table                urls               create

The Endpoint component simplifies the definition of API Gateway (see the source):

const api = new Endpoint("urlapi", {
    path: "/{proxy+}",
    lambda: func.lambda,
});

The component hides the complexity from the clients — if the abstraction was selected correctly, that is. The component class can be reused in multiple places, in several projects, across teams, etc.
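For instance, a hypothetical second instance of the same component could define the Add URL function; the file name add is my assumption here:

const addFunc = new Lambda("lambda-add", {
    path: "./app",
    file: "add",  // assumed handler file for the Add URL code
    environment: {
        "COUNTER_TABLE": counterTable.name,
    },
});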

Standard Component Library

In fact, the Pulumi team came up with lots of high-level components that build abstractions on top of raw resources. The components from the @pulumi/cloud-aws package are particularly useful for serverless applications.

Here is the full URL shortener application with DynamoDB table, Lambdas, API Gateway, and S3-based static files:

import * as aws from "@pulumi/cloud-aws";

// Create a table "urls", with "name" as primary key.
let urlTable = new aws.Table("urls", "name");

// Create a web server.
let endpoint = new aws.API("urlshortener");

// Serve all files in the www directory to the root.
endpoint.static("/", "www");

// GET /url/{name} redirects to the target URL based on a short-name.
endpoint.get("/url/{name}", async (req, res) => {
    let name = req.params["name"];
    let value = await urlTable.get({ name });
    let url = value && value.url;

    // If we found an entry, 301 redirect to it; else, 404.
    if (url) {
        res.setHeader("Location", url);
        res.status(301);
        res.end("");
    } else {
        res.status(404);
        res.end("");
    }
});

// POST /url registers a new URL with a given short-name.
endpoint.post("/url", async (req, res) => {
    let url = req.query["url"];
    let name = req.query["name"];
    await urlTable.insert({ name, url });
    res.json({ shortenedURLName: name });
});

export let endpointUrl = endpoint.publish().url;

The coolest thing here is that the actual implementation code of AWS Lambdas is intertwined with the definition of resources. The code looks very similar to an Express application. AWS Lambdas are defined as TypeScript lambdas. All strongly typed and compile-time checked.

It’s worth noting that at the moment such high-level components only exist in TypeScript. One could create their custom components in Python or Go, but there is no standard library available. Pulumi folks are actively trying to figure out a way to bridge this gap.

Avoiding Vendor Lock-in?

If you look closely at the previous code block, you’ll notice that only one line is AWS-specific: the import statement. The rest is just naming.

We can get rid of that one too: just change the import to import * as cloud from "@pulumi/cloud"; and replace aws. with cloud. everywhere. Now, we’d have to go to the stack configuration file and specify the cloud provider there:

cloud:provider: aws

That’s enough to make the application work again!
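Here is a minimal sketch of what the provider-neutral version could look like, assuming only the import changes and the rest of the program stays as in the previous snippet:

import * as cloud from "@pulumi/cloud";

// Identical code, with no AWS-specific names in sight.
let urlTable = new cloud.Table("urls", "name");
let endpoint = new cloud.API("urlshortener");
endpoint.static("/", "www");
// ... GET and POST routes exactly as in the previous snippet ...
export let endpointUrl = endpoint.publish().url;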

Vendor lock-in seems to be a big concern among many people when it comes to cloud architectures that rely heavily on managed cloud services, including serverless applications. While I don’t necessarily share those concerns and am not sure if generic abstractions are the right way to go, the Pulumi Cloud library can be one direction for exploration.

Pulumi gives you a choice of abstraction levels. Working on top of the cloud provider’s API and internal resource providers, you can use raw components with maximum flexibility, or opt in to higher-level abstractions. Mixing and matching both in the same program is possible too.
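For example, a hypothetical program could combine a high-level cloud.API with a raw aws.dynamodb.Table when full control over the table settings is needed; this combination is illustrative, not taken from the article’s shortener:

import * as aws from "@pulumi/aws";
import * as cloud from "@pulumi/cloud";

// High-level component for the HTTP API and static files...
const api = new cloud.API("urlshortener");
api.static("/", "www");

// ...next to a raw resource, configured in full detail.
const rawTable = new aws.dynamodb.Table("urls", {
    attributes: [{ name: "name", type: "S" }],
    hashKey: "name",
    readCapacity: 1,
    writeCapacity: 1,
});

export let mixedUrl = api.publish().url;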

Infrastructure as Real Code

Designing applications for the modern cloud means utilizing multiple cloud services which have to be configured to play nicely together. The Infrastructure as Code approach is almost a requirement to keep the management of such applications reliable in a team setting and over an extended period of time.

Application code and supporting infrastructure become more and more blended, so it’s natural that software developers take the responsibility to define both. The next logical step is to use the same set of languages, tooling, and practices for both software and infrastructure.

Pulumi exposes cloud resources as APIs in several popular general-purpose programming languages. Developers can directly transfer their skills and experience to define, build, compose, and deploy modern cloud-native and serverless applications more efficiently than ever.

Originally published by Mikhail Shilkov at https://medium.freecodecamp.org
