How to build a GraphQL API with the Serverless framework

How to build Scalable APIs using GraphQL and Serverless

GraphQL is a different way to connect your client applications to your backend. With REST, the client connects to the backend via a defined endpoint: it must specify headers, an HTTP method (GET, POST, DELETE, etc.), a body, and all kinds of parameters, and it usually needs to know all of this beforehand to request information from the server. The client may get back a lot of information it doesn't need (over-fetching), and it is often forced to make several calls to the backend to populate a single view, since one request returns only part of the information needed (under-fetching).
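To make over-fetching concrete, here is a toy sketch (the record and field names are invented for illustration) contrasting a fixed REST payload with a GraphQL-style field selection:

```javascript
// A full record, as a REST endpoint with a fixed payload would return it.
const userRecord = {
  id: 1,
  name: 'Ada',
  email: 'ada@example.com',
  bio: 'A long biography the view never displays...',
};

// GraphQL-style: the client names exactly the fields it wants.
function select(record, fields) {
  return Object.fromEntries(fields.map(field => [field, record[field]]));
}

const view = select(userRecord, ['name', 'email']);
console.log(Object.keys(view)); // [ 'name', 'email' ]
```

The REST response carries the whole record whether the view needs it or not; the GraphQL-style selection returns only the fields the client asked for.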

GraphQL has some important characteristics. The client and the server agree on a contract, called the GraphQL schema, which defines all the operations the client can perform against the backend. The client can call an operation with one request, which can return multiple types if needed, so there is no under-fetching of information. The client can also specify exactly which attributes of those types it wants, so there is no over-fetching of information.

GraphQL is implemented between the client and the data sources, making GraphQL a single entry point to the backend. The data sources can be anything from NoSQL databases, relational databases, and HTTP servers to whatever returns data. To connect a GraphQL implementation to the data sources, you need to write resolvers. Resolvers are a very important part of your GraphQL application, as they are the ones that translate GraphQL requests or responses to whatever the data sources understand. For example, if the data source is a relational database, the resolver will need to know how to transform a GraphQL query into a SELECT operation and then translate whatever the relational database returns into a GraphQL response.
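As a toy illustration of that last example (this is not code from any real GraphQL library; the table, field names, and fake database are invented), a resolver for a relational data source might look like this:

```javascript
// Hypothetical resolver: turns a parsed GraphQL selection into a SQL
// SELECT, then maps the database row back into a GraphQL-shaped response.
function resolveUser(selection, db) {
  const columns = selection.fields.join(', ');
  const sql = `SELECT ${columns} FROM users WHERE id = ${Number(selection.id)}`;
  const row = db.query(sql); // pretend-synchronous driver, for the sketch only
  return { data: { user: row } };
}

// A fake database stands in for the real data source.
const fakeDb = { query: () => ({ name: 'Ada', email: 'ada@example.com' }) };

const response = resolveUser({ id: 42, fields: ['name', 'email'] }, fakeDb);
console.log(response.data.user.name); // Ada
```

The same pattern applies in reverse for mutations: the resolver translates a GraphQL mutation into an INSERT or UPDATE and wraps the result back into the schema's types.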

Current trends in software and backend architecture have been evolving towards a more loosely coupled, more granular design. I am sure most of you have heard of microservice-based architectures. The latest development on that front in the past couple of years has been the advent of Serverless, which allows you to run applications in very cost-effective, ephemeral services. This is why it is important to have a proper gateway for your API that is able to route all your requests to the designated endpoint.

GraphQL stands out in that respect as a mature, open-source standard started at Facebook. We will first look at how to set up our own GraphQL server locally, then explore the query language and schema definitions it provides, which essentially allow you to query your mesh of services from a single point of entry. The beauty of this is that it will notify you early, by erroring out, if any of your endpoints is misbehaving or its schema is out of date. Another advantage is that it makes your API documentation a real-time process and gives you what one might call an API playground, where you can query and explore your API.

After we explore our Serverless API, we will look at the more advanced features and standards around mutations and resolvers, and then we will close by going all in on Serverless and deploying our GraphQL server to a function in the cloud.

Speakers:
Simona Cotin - Senior Cloud Advocate, Microsoft
Matteo Collina - Technical Director, NearForm

Source Code: https://mybuild.techcommunity.microsoft.com/VideoDownloader/Download-BuildResources.zip

Building Bliss: Serverless Fullstack React with Prisma 2 and GraphQL

In this post, we will show how you can deploy a totally serverless stack using Prisma 2 and Next.js.

This type of solution has only recently become available, and while it is still in beta, it really represents a full-stack developer's paradise: you can develop an app, deploy it, forget about the DevOps particulars, and be confident that it will work regardless of load.

Benefits:
  • One command to deploy the entire stack (Now)
  • Infinitely scalable, pay for what you use (lambda functions)
  • No servers to maintain (lambda functions)
  • All the advantages of React (composability, reusability and strong community support)
  • Server-side rendering for SEO (Next.js)
  • Correctly rendered social media link shares in Facebook and Twitter (Next.js)
  • Easy-to-evolve API (GraphQL)
  • One Schema to maintain for the entire stack (Prisma 2)
  • Secure secret management (Now)
  • Easy to set up development environment with hot code reloading (Docker)
  • Strong typing (GraphQL and TypeScript), autogenerated when possible (graphql-gen)

Before you start, you should set up and configure an RDS instance as described in our previous blog post.

Videos:

I. Install Dependencies

II. Add Environmental Parameters

III. Configure the Backend

IV. Configure the Now Service

V. Set up Now Secrets and Deploy!

We will pick up from the example from our multi-part blog series [1][2][3]. If you aren't interested in following along from the start, you can start by checking out the repo from the now-serverless-start tag:

git clone https://github.com/CaptainChemist/blog-prisma2
git fetch && git fetch --tags
git checkout now-serverless-start
I. Install and clean up dependencies

Upgrade to Next v9

In frontend/package.json, make sure that next has a version of "9.0.2" or greater. Previously we were using a canary version of 8.1.1 for TypeScript support, but version 9 of Next.js has since been released, so we want to make sure we can take advantage of all the latest goodies.

Install webpack to the frontend

As a precaution, you should install webpack in the frontend folder. I've seen inconsistent behavior with now: if webpack is not installed, the deploy will sometimes fail, saying that it needs webpack. From what I've read online, it shouldn't be required, so this is likely a bug, but it can't hurt to add it:

npm install --save-dev webpack

Remove the main block from package.json and frontend/package.json

When we generated our package.json files, it auto-populated the main field. Since we are not using this feature and don't even have an index.js file in either folder, we should go ahead and remove them. In frontend/package.json go ahead and remove line 5. We didn't use it previously and it has the potential to confuse the now service.

"main": "index.js",

Also, do the same in the package.json in the root folder.

Install Prisma2 to the backend

Although we globally install prisma2 in our docker containers, we now need to add it to our backend package.json file so that it will be available during the now build step in AWS. Navigate to the backend folder and install prisma2:

npm install --save-dev prisma2

Install Zeit Now

We should install now globally so that we will be able to run it from the command line:

npm install -g now
II. Add Environmental Variables

Add a .env file to the root of your project. Add the following variables which we will use across our docker environment.

MYSQL_URL=mysql://root:prisma@mysql:3306/prisma
BACKEND_URL=http://backend:4000/graphql
FRONTEND_URL=http://localhost:3000

Modify the docker-compose.yml file to inject these new variables into our docker containers. This is what the updated file looks like:

docker-compose.yml

version: '3.7'
services:
  mysql:
    container_name: mysql
    ports:
      - '3306:3306'
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: prisma
      MYSQL_ROOT_PASSWORD: prisma
    volumes:
      - mysql:/var/lib/mysql
  prisma:
    links:
      - mysql
    depends_on:
      - mysql
    container_name: prisma
    ports:
      - '5555:5555'
    build:
      context: backend/prisma
      dockerfile: Dockerfile
    environment:
      MYSQL_URL: ${MYSQL_URL}
    volumes:
      - /app/prisma
  backend:
    links:
      - mysql
    depends_on:
      - mysql
      - prisma
    container_name: backend
    ports:
      - '4000:4000'
    build:
      context: backend
      dockerfile: Dockerfile
      args:
        - MYSQL_URL=${MYSQL_URL}
    environment:
      MYSQL_URL: ${MYSQL_URL}
      FRONTEND_URL: ${FRONTEND_URL}
    volumes:
      - ./backend:/app
      - /app/node_modules
      - /app/prisma
  frontend:
    container_name: frontend
    ports:
      - '3000:3000'
    build:
      context: frontend
      dockerfile: Dockerfile
    environment:
      BACKEND_URL: ${BACKEND_URL}
    volumes:
      - ./frontend:/app
      - /app/node_modules
      - /app/.next

volumes: # define our mysql volume used above
  mysql:

Let's take a look at the parts that were changed; below are the snippets we added to the file above:

  prisma:
    environment:
      MYSQL_URL: ${MYSQL_URL}

# ...more lines

  backend:
    build:
      context: backend
      dockerfile: Dockerfile
      args:
        - MYSQL_URL=${MYSQL_URL}
    environment:
      MYSQL_URL: ${MYSQL_URL}
      FRONTEND_URL: ${FRONTEND_URL}

# ...more lines

  frontend:
    environment:
      BACKEND_URL: ${BACKEND_URL}
We added environment blocks to the prisma studio, backend, and frontend containers. Any variable we define in the .env file, such as VAR1=my-variable, can be referenced in the yml as ${VAR1}, which is equivalent to writing the my-variable string directly in that spot of the yml file.

Dynamically set Backend url on the frontend

We need to set the uri that the frontend connects to dynamically instead of hardcoding it. In frontend/utils/init-apollo.js we previously had this line, which would connect to localhost if the request came from the browser, or to the backend container if it came from the Next.js server:

uri: isBrowser ? 'http://localhost:4000' : 'http://backend:4000', // Server URL (must be absolute)

We need to still keep track of whether we are in the browser or server in the docker environment. In addition, though, we need to check whether we are in a docker environment or whether we are deployed via now into a lambda function.

We can access environment variables using process.env.ENVIRONMENTAL_VARIABLE. We check whether the url matches our local environment url; if it does, we know we are in a docker environment. Our logic is then: if we are in a docker environment and the browser is making the request, we return localhost; otherwise we pass the BACKEND_URL as the uri.

frontend/utils/init-apollo.js

function create(initialState) {
  // Check out https://github.com/zeit/next.js/pull/4611 if you want to use the AWSAppSyncClient
  const isBrowser = typeof window !== 'undefined'
  const isDocker = process.env.BACKEND_URL === 'http://backend:4000/graphql'
  return new ApolloClient({
    connectToDevTools: isBrowser,
    ssrMode: !isBrowser, // Disables forceFetch on the server (so queries are only run once)
    link: new HttpLink({
      uri:
        isDocker && isBrowser
          ? 'http://localhost:4000/graphql'
          : process.env.BACKEND_URL,
      credentials: 'same-origin', // Additional fetch() options like credentials or headers
      // Use fetch() polyfill on the server
      fetch: !isBrowser && fetch,
    }),
    cache: new InMemoryCache().restore(initialState || {}),
  })
}

Now that should really be all we need to do, but since Next.js renders both on the server and in the client, we won't have access to server environment variables unless we take one more step: exposing the variable in our frontend/next.config.js file.

frontend/next.config.js

const withCSS = require('@zeit/next-css')

module.exports = withCSS({
  target: 'serverless',
  env: {
    BACKEND_URL: process.env.BACKEND_URL,
  },
})

Note that due to how Next.js handles process.env, you cannot destructure variables off of it. The line below will not work; we need to use the full process.env.BACKEND_URL expression.

const { BACKEND_URL } = process.env // NO!
III. Configure our backend server

Update the backend server to the /graphql endpoint and configure CORS

We updated the url above to the /graphql endpoint for the backend server because now will deploy our backend graphql server to ourdomain.com/graphql. We need to make this change in backend/src/index.ts so that the server runs at the /graphql endpoint instead of /.

In addition, while we are here, we will disable subscriptions and enable CORS. CORS stands for cross-origin resource sharing, and it tells the backend server which frontend servers it should accept requests from. This ensures that if someone else stood up a frontend next server pointed at our backend server, their requests would fail. We need this because you can imagine how damaging it could be if someone bought a domain like crazyamazondeals.com (I'm just making this up) and pointed their frontend server at the real backend server of amazon's shopping portal. This would allow a fake amazon frontend to gather all sorts of customer information while still sending real requests to amazon's actual backend server. Yikes!

In order to enable CORS we will pass in our frontend url. We will also enable credentials for future authentication-related purposes.

backend/src/index.ts

server.start(
  {
    endpoint: '/graphql',
    playground: '/graphql',
    subscriptions: false,
    cors: {
      credentials: true,
      origin: process.env.FRONTEND_URL,
    },
  },
  () => console.log('🚀 Server ready')
)
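The origin check that this cors block turns on can be pictured as follows (a simplified sketch of the idea, not graphql-yoga internals):

```javascript
// Simplified model of CORS enforcement: the browser only lets a frontend
// read the response if its origin matches what the backend allows.
function isAllowedOrigin(requestOrigin, allowedOrigin) {
  return requestOrigin === allowedOrigin;
}

console.log(isAllowedOrigin('https://crazyamazondeals.com', 'https://www.amazon.com')); // false
console.log(isAllowedOrigin('https://www.amazon.com', 'https://www.amazon.com'));       // true
```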

Update the backend/prisma/project.prisma file to use environmental variables and set our platform.

We can use env("MYSQL_URL"), which reads our MYSQL_URL environment variable. Starting with prisma2 preview-3+, we need to specify which platforms we plan to use. We can use "native" for our docker work, but we need "linux-glibc-libssl1.0.2" for Zeit Now.

backend/prisma/project.prisma

datasource db {
  provider = "mysql"
  url      = env("MYSQL_URL")
}

generator photon {
  provider  = "photonjs"
  platforms = ["native", "linux-glibc-libssl1.0.2"]
}

// Rest of file

Update the backend/Dockerfile to pass the environment variable into prisma2 generate. We first define a docker argument using ARG named MYSQL_URL. Then we take the MYSQL_URL environment variable and assign it to this newly created ARG.

We need the MYSQL_URL environment variable so that our url from the prisma file gets evaluated properly.

backend/Dockerfile

FROM node:10.16.0
RUN npm install -g --unsafe-perm prisma2

RUN mkdir /app
WORKDIR /app

COPY package*.json ./
COPY prisma ./prisma/

ARG MYSQL_URL
ENV MYSQL_URL "$MYSQL_URL"

RUN npm install
RUN prisma2 generate

CMD ["npm", "start" ]

Note that the only reason we have access to the $MYSQL_URL variable in this Dockerfile is the args block we previously added to the docker-compose.yml file. Variables added to the environment block of docker-compose are only accessible during the runtime of the containers, not during the build step, which is where we are when the Dockerfile is executed.

backend:
  build:
    context: backend
    dockerfile: Dockerfile
    args:
      - MYSQL_URL=${MYSQL_URL}
IV. Add our Now Configuration

Create now secrets

Locally, we have been using the .env file to store our secrets. The only reason we can commit that file to our repo is that it contains no sensitive environment variables. If you ever add real secrets to that file, such as a stripe key, never commit it to github, or you risk them being compromised!

For production, we need a more secure way to store secrets. Now provides a nice way to do this:

now secret add my_secret my_value

Now will encrypt and store these secrets on their servers; when we upload our app we can use them, but we won't be able to read them back out, even if we try to be sneaky with console.logs. We need to create secrets for the following variables from our .env file:

MYSQL_URL=mysql://user:password@your-database-url:3306/prisma
BACKEND_URL=https://your-now-url.sh/graphql
FRONTEND_URL=https://your-now-url

Note that by default your-now-url will be yourProjectFoldername.yourNowUsername.now.sh, but you can always skip this step for now: get to Step V of this tutorial, deploy your site, and note where it deploys to (it will be the last line of the console output). Then come back to this step, add the now secrets, and redeploy the site.

Add a now.json file to the root directory

We need to create a now.json file, which dictates how our site is deployed. The first part of it has environment variables for both the build and the runtime. We reference the secrets created in the previous step with the @our-secret-name syntax. If you forget which names you used, you can always type now secrets ls to get the names of the secrets (but, critically, not the secrets themselves).

Next we have to define our build steps. In our case we have to build both our nextjs application and our graphql-yoga server. The nextjs app is built using the specially designed @now/next builder, and we just point it at the next.config.js file in our frontend folder. The other build uses the index.ts file in our backend/src directory; the builder is smart enough to compile the code down to javascript and deploy it to a lambda function.

Finally, we have to define our routes. The backend server will end up at the /graphql endpoint while the frontend directory will use everything else. This ensures that any page we go to under ourdomain.com will be forwarded onto the nextjs server except the /graphql endpoint.

now.json

{
  "version": 2,
  "build": {
    "env": {
      "MYSQL_URL": "@mysql_url",
      "BACKEND_URL": "@backend_url",
      "FRONTEND_URL": "@frontend_url"
    }
  },
  "env": {
    "MYSQL_URL": "@mysql_url",
    "BACKEND_URL": "@backend_url",
    "FRONTEND_URL": "@frontend_url"
  },
  "builds": [
    {
      "src": "frontend/next.config.js",
      "use": "@now/next"
    },
    {
      "src": "backend/src/index.ts",
      "use": "@now/node",
      "config": { "maxLambdaSize": "20mb" }
    }
  ],
  "routes": [
    { "src": "/graphql", "dest": "/backend/src/index.ts" },
    {
      "src": "/(.*)",
      "dest": "/frontend/$1",
      "headers": {
        "x-request-path": "$1"
      }
    }
  ]
}
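The two routes above can be sketched as a function (an illustration of the matching order, not how now is implemented):

```javascript
// now tries routes in order: the exact /graphql rule matches first,
// then the catch-all rewrites everything else to the frontend build.
function route(path) {
  if (path === '/graphql') return '/backend/src/index.ts';
  const match = path.match(/^\/(.*)$/); // the "(.*)" capture becomes $1
  return `/frontend/${match[1]}`;
}

console.log(route('/graphql')); // /backend/src/index.ts
console.log(route('/about'));   // /frontend/about
```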

Add a .nowignore file to the root directory

Finally, we can add our ignore file which will tell now which things it shouldn't bother to upload.

.nowignore

**/node_modules
.next
Dockerfile
README.MD
V. Deploy our now full stack site

This part is easy. Simply type now from the root folder and let it fly!

Thanks for reading. If you liked this post, share it with all of your programming buddies!

Further reading

☞ The Complete JavaScript Course 2019: Build Real Projects!

☞  Show Suggestions on Typing using Javascript

☞ JavaScript Bootcamp - Build Real World Applications

☞ The Web Developer Bootcamp

☞ JavaScript Programming Tutorial - Full JavaScript Course for Beginners

☞ Best JavaScript Frameworks, Libraries and Tools to Use in 2019

☞ What JavaScript Framework You Should Learn to Get a Job in 2019?


☞ Microfrontends — Connecting JavaScript frameworks together (React, Angular, Vue etc)

☞ Do we still need JavaScript frameworks?


Originally published on https://www.codemochi.com

Creating a simple GraphQL Application with AWS AppSync


In this post, we’ll get started with AppSync by creating a simple GraphQL application with two data sources: DynamoDB and AWS Lambda.

Originally published by Ran Ribenzaft at https://epsagon.com
A Beginner’s Guide to AWS AppSync

First, let's learn a bit more about GraphQL:

GraphQL is a specification for which there are many different kinds of implementations in the market. AppSync is a serverless implementation of GraphQL by AWS and is a managed GraphQL platform. AppSync can be a way to replace the API Gateway + AWS Lambda pattern for connecting clients to your serverless backends. 

GraphQL – Some Background

GraphQL is a different way to connect your client applications to your backend. With REST, the client connects to the backend via a defined endpoint: it must specify headers, an HTTP method (GET, POST, DELETE, etc.), a body, and all kinds of parameters, and it usually needs to know all of this beforehand to request information from the server. The client may get back a lot of information it doesn't need (over-fetching), and it is often forced to make several calls to the backend to populate a single view, since one request returns only part of the information needed (under-fetching).

GraphQL has some important characteristics. The client and the server agree on a contract, called the GraphQL schema, which defines all the operations the client can perform against the backend. The client can call an operation with one request, which can return multiple types if needed, so there is no under-fetching of information. The client can also specify exactly which attributes of those types it wants, so there is no over-fetching of information.

GraphQL is implemented between the client and the data sources, making GraphQL a single entry point to the backend. The data sources can be anything from NoSQL databases, relational databases, and HTTP servers to whatever returns data. To connect a GraphQL implementation to the data sources, you need to write resolvers. Resolvers are a very important part of your GraphQL application, as they are the ones that translate GraphQL requests or responses to whatever the data sources understand. For example, if the data source is a relational database, the resolver will need to know how to transform a GraphQL query into a SELECT operation and then translate whatever the relational database returns into a GraphQL response.

Introduction to AppSync

AppSync can do everything GraphQL specifies and also has additional features that come out of the box. For example, it supports authentication and authorization, which enables you to filter the data you return and the operations clients can perform depending on which user is signed in. AppSync supports Cognito, API Key, IAM permissions, and OpenID Connect, and even provides out-of-the-box support for DynamoDB, AWS Lambda, Elasticsearch, HTTP, and RDS as data sources.

AppSync also has support for real-time operations: if a data source is updated, clients subscribed to those operations get updated as well, so clients don't need to poll the GraphQL service to check whether new data is available. There is also offline support, meaning that if the client application goes offline and modifies data, the data gets synced with the data sources automatically when the client comes back online.

Read about all of AppSync’s features in this document and understand the pricing better.

Getting Started With AppSync

As mentioned, we'll be using DynamoDB and AWS Lambda to create our GraphQL application. The application will store image metadata in DynamoDB: the images themselves will be stored in S3, and for each image we'll save an identifier and its S3 path in DynamoDB. When we want to retrieve an image, we'll look up its S3 path by the image identifier, and an AWS Lambda function will return a signed URL for that image so that the client can see it.

[Diagram: AppSync architecture]

There are some basic considerations for a beginner serverless developer when getting started with a new serverless application. We're going to use the Serverless Framework, along with a Serverless Framework plugin that makes AppSync easy to use in our project. Before starting this tutorial, make sure you have an AWS account and the Serverless Framework installed and configured on your computer. Then, create a new Serverless Framework project by running this command inside an empty directory:

$ sls create --template aws-nodejs --name appsync-intro

This will create the boilerplate for our project–the serverless.yml and handler.js files.

Now go to the serverless.yml file and edit it like this:

service: appsync-intro

plugins:
  - serverless-appsync-plugin
  - serverless-pseudo-parameters

provider:
  name: aws
  runtime: nodejs10.x

functions:
  graphql:
    handler: handler.graphql
For this tutorial, you will be using serverless-appsync-plugin and serverless-pseudo-parameters. For the plugins to work, you need to install them into your project from your terminal:

$ npm install --save serverless-appsync-plugin
$ npm install --save serverless-pseudo-parameters

Now you can get started setting up the AppSync application. Write the following code into the serverless.yml after the provider and before the functions:

custom:
  IMAGE_TABLE: appsync-intro-image-table
  BUCKET_NAME: <your-bucket-name>
  appSync:
    name: appsync-intro
    authenticationType: API_KEY
    mappingTemplates:
      - dataSource: Images
        type: Mutation
        field: saveImage
        request: saveImage-request-mapping-template.vtl
        response: saveImage-response-mapping-template.vtl
      - dataSource: lambdaDatasource
        type: Query
        field: getImageSignedPath
        request: getImageSignedPath-request-mapping-template.vtl
        response: getImageSignedPath-response-mapping-template.vtl
    dataSources:
      - type: AMAZON_DYNAMODB
        name: Images
        description: 'Table containing the metadata of the images'
        config:
          tableName: { Ref: ImageTable }
          iamRoleStatements:
            - Effect: "Allow"
              Action:
                - "dynamodb:GetItem"
                - "dynamodb:PutItem"
              Resource:
                - "arn:aws:dynamodb:#{AWS::Region}:*:table/${self:custom.IMAGE_TABLE}"
                - "arn:aws:dynamodb:#{AWS::Region}:*:table/${self:custom.IMAGE_TABLE}/*"
      - type: AWS_LAMBDA
        name: lambdaDatasource
        description: 'Lambda DataSource'
        config:
          functionName: graphql
          iamRoleStatements: # custom IAM Role statements for this DataSource. Ignored if serviceRoleArn is present. Auto-generated if both serviceRoleArn and iamRoleStatements are omitted
            - Effect: "Allow"
              Action:
                - "lambda:invokeFunction"
              Resource:
                - "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:${self:service}-dev-graphql"
                - "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:${self:service}-dev-graphql:*"

The first two lines just define variables with the name of the bucket and the DynamoDB table, which will be used to store the metadata.

Now, we can start configuring the AppSync application with the following details:

  • Name: The name of the AppSync application.
  • Authentication Type: How we want to secure the AppSync app. In this case, you will use the simplest way of securing the app–with an API Key. 
  • Mapping Templates: To define all the different resolvers for your application. For each operation or field defined in your schema.graphql, you need to specify the resolvers–one resolver for each request and one resolver for each response. Resolvers in AppSync are written in VTL (velocity template language).
  • Data sources: To define all the different data sources for your application. You have two data sources in this application: DynamoDB and Lambda. In this section, you will specify the type of data source, the resource name or ARN (Amazon resource name), and the permissions that you will give AppSync to operate over this data source.

With everything above defined, we'll create the table and also give the function permission to fetch items from the table and to retrieve objects from the S3 bucket. For this, add the following code after the provider and before the custom property in serverless.yml:

iamRoleStatements:
  - Effect: "Allow"
    Action:
      - "s3:ListBucket"
      - "s3:GetObject"
    Resource: "arn:aws:s3:::${self:custom.BUCKET_NAME}/*"
  - Effect: "Allow"
    Action:
      - "dynamodb:GetItem"
    Resource: "arn:aws:dynamodb:#{AWS::Region}:*:table/${self:custom.IMAGE_TABLE}"

And then at the end of the file, add: 

resources:
  Resources:
    # Image table
    ImageTable:
      Type: "AWS::DynamoDB::Table"
      Properties:
        KeySchema:
          - AttributeName: name
            KeyType: HASH
        AttributeDefinitions:
          - AttributeName: name
            AttributeType: S
        BillingMode: PAY_PER_REQUEST
        TableName: ${self:custom.IMAGE_TABLE}

Your serverless.yml is almost ready, but we still need to define the schema.graphql file for your application. So, create a new file and name it “schema.graphql.” There we can add: 

type Query {
  getImageSignedPath(imageName: String!): String!
}

type Mutation {
  saveImage(name: String!, s3Path: String!): Image!
}

type Image {
  name: String!
  s3Path: String!
}

schema {
  query: Query
  mutation: Mutation
}

This adds two operations and one type to your application. The type is called Image and has a name and an s3Path that refers to the place where the image is stored in AWS S3. The query, getImageSignedPath, is the operation without side effects: given an image name, it returns a signed URL for the image. The mutation, saveImage, is the operation that performs side effects in your backend: given an image name and s3Path, it stores the image metadata in the DynamoDB image table.
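For example, the HTTP request body a client would POST for the saveImage mutation looks like this (the endpoint and api key in the comment are placeholders, not values from this tutorial):

```javascript
// The GraphQL document and variables a client sends for saveImage.
const mutation = `
  mutation SaveImage($name: String!, $s3Path: String!) {
    saveImage(name: $name, s3Path: $s3Path) {
      name
      s3Path
    }
  }
`;

const body = JSON.stringify({
  query: mutation,
  variables: { name: 'image1.png', s3Path: 'images/image1.png' },
});

// A real call would look something like:
// fetch('https://<your-appsync-endpoint>/graphql', {
//   method: 'POST',
//   headers: { 'x-api-key': '<your-api-key>' },
//   body,
// })
console.log(JSON.parse(body).variables.name); // image1.png
```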

Next, we'll create the four resolver files specified in your serverless.yml: one each for the request and response of getImageSignedPath, and one each for the request and response of saveImage. The resolvers are very simple, and you can find the files in this link.

Finally, write the business logic for your function. Go ahead and write the following code in your handler.js:

'use strict';
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ signatureVersion: 'v4' });
const dynamo = new AWS.DynamoDB.DocumentClient();

module.exports.graphql = async event => {
  switch (event.field) {
    case 'getImageSignedPath': {
      const bucket = '<your own bucket name>';
      const imageName = event.arguments.imageName;
      const key = await getImageS3Path(imageName);
      return signURL(bucket, key);
    }
    default: {
      throw new Error(`Unknown field, unable to resolve ${event.field}`);
    }
  }
};

function signURL(bucket, key) {
  const params = { Bucket: bucket, Key: key };
  return s3.getSignedUrl('getObject', params);
}

async function getImageS3Path(imageName) {
  const params = {
    Key: {
      name: imageName,
    },
    TableName: 'appsync-intro-image-table',
  };
  return dynamo.get(params).promise().then(result => result.Item.s3Path);
}

This function is a very simple one that, given an image name that is passed in the arguments of the request, returns a signed URL. It first fetches the image S3 path from the image metadata table. Then, when it gets the path, it uses the AWS SDK S3 module to retrieve the signed URL and finally returns that URL as a response. 

You can find all the code for this application here.

Testing

To test this application, simply deploy it from the terminal: 

$ sls deploy

This will deploy and create all the resources you need–DynamoDB table, AppSync application, and function. Then, in your AWS account, simply create an S3 bucket with the name that you specified in the environmental variables. Here, you can store some images. Now go to the AppSync console, then to Queries. There you can create some queries to test the application.

For example, if you stored an image with the path image1.png, you can type this mutation there.

[Screenshot: AppSync Queries console]

If you want to fetch the signed path, you can type this query:

[Screenshot: querying the image path]

Integrating the API With a Client

If you want to use this API in a client application, use the Amplify library, which helps you build cloud-native applications. With this library, you can easily connect to the API and perform queries and mutations against it.

Monitoring With Epsagon

Epsagon’s tracing captures AppSync triggers, as seen in the image below. Get started in less than 5 minutes.

[Screenshot: Epsagon tracing AppSync triggers]

Conclusion

AppSync is a different way to create APIs than REST. By using GraphQL and a managed platform, you can ensure extremely fast development for your client backends. You can also sync multiple entry points in the backend instead of giving the responsibility for orchestrating all backend calls to the client application. AppSync can be a great tool when you want to implement a customer-facing web app that has a complex model. You can also easily define the schema and then connect it to the different data sources.

Thanks for reading

If you liked this post, share it with all of your programming buddies!

Follow us on Facebook | Twitter

Further reading: AWS and GraphQL

AWS Certified Solution Architect Associate

AWS Lambda vs. Azure Functions vs. Google Functions

Running TensorFlow on AWS Lambda using Serverless

Deploy Docker Containers With AWS CodePipeline

A Complete Guide on Deploying a Node app to AWS with Docker

Create and Deploy AWS and AWS Lambda using Serverless Framework

Introduction To AWS Lambda

NodeJS - The Complete Guide (incl. MVC, REST APIs, GraphQL)

GraphQL with React: The Complete Developers Guide

How to create a simple CRUD App using GraphQL and Node.js

A Beginner’s Guide to GraphQL

Node, Express, PostgreSQL, Vue 2 and GraphQL CRUD Web App

Developing and Securing GraphQL APIs with Laravel