Ikram Mihan


Introduction to Mutation and Database Access in GraphQL

GraphQL, described as a data query and manipulation language for APIs and a runtime for fulfilling queries with existing data, allows varying clients to use your API and query for just the data they need. It helps solve issues that some REST services suffer from, namely over-fetching and under-fetching, both of which hurt performance. In this post, I'll introduce you to GraphQL mutations. We will also add a database layer to the API, using Prisma as the data access layer.

Adding A Database

Before we move on to creating GraphQL mutations, I want us to use a database for the existing queries we have in our GraphQL system. We will be using Prisma as a data access layer over a MySQL database. For this example we will use the Prisma demo server running on the Prisma Cloud service.

Let's go ahead and define a database schema. Add a new file src/prisma/datamodel.prisma with the following content:

type Book {
    id: ID! @id
    title: String!
    pages: Int
    chapters: Int
    authors: [Author!]!
}

type Author {
    id: ID! @id
    name: String! @unique
    books: [Book!]!
}

The above schema represents our data model. Each type will be mapped to a database table. Adding ! to a type makes that column in the database non-nullable. We also annotated some fields with the @id directive. GraphQL directives are preceded by @ and can be used in the schema language or the query language.

The @id directive is managed by Prisma; it marks the field as the primary key in the database and auto-generates a globally unique ID for that column. The @unique directive marks the column with a unique constraint in the database. This will also allow us to find authors by their names, as you'll see later.
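
To see what the @unique directive buys us, here is a minimal sketch (it assumes the JavaScript Prisma client we generate later in this post): because name is unique, the generated client exposes a lookup by that field, which is exactly what we'll rely on when checking whether an author already exists.

// Sketch only: assumes the Prisma client generated later in this post.
const { prisma } = require("./prisma/client");

async function findAuthorByName(name) {
  // A lookup by a @unique field returns the matching record or null.
  return prisma.author({ name });
}

findAuthorByName("Peter Mbanugo").then(author => console.log(author));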

Next we add a new file src/prisma/prisma.yml which will contain configuration options for Prisma.

# The HTTP endpoint for the demo server on Prisma Cloud
endpoint: ""

# Points to the file that contains your datamodel
datamodel: datamodel.prisma

# Specifies language & location for the generated Prisma client
generate:
  - generator: javascript-client
    output: ./client

This will be used by the Prisma CLI to configure and update the Prisma server in the cloud, and to generate a client API based on the data model. The endpoint option will contain the URL to the Prisma Cloud server. The datamodel option specifies a path to the data model. The generate option specifies that we're using the JavaScript client generator and that it should output the client files to the ./client folder. The Prisma CLI can generate the client using other generators as well; there are currently generators for TypeScript and Go. We're working with JavaScript, so I've opted for the javascript-client generator.
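
If the project were written in TypeScript instead, the generate section could be swapped for the TypeScript generator, along these lines (a sketch of an alternative we don't use in this post):

# Hypothetical alternative: generate a TypeScript client instead of a JavaScript one.
generate:
  - generator: typescript-client
    output: ./client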

We need the Prisma CLI to deploy our Prisma server and for generating the Prisma client. We’ll install the CLI globally using npm. Run the following command to install the Prisma CLI.

npm install -g prisma

I’m running version 1.34.0 of the CLI. With that installed, we now need to deploy our data model. Follow the instructions below to set up the database on Prisma Cloud.

  1. Run cd src/prisma && prisma deploy in the command line.
  2. You’ll get prompted to choose how you want to set up the Prisma server. Select Demo Server to continue.
  3. The CLI might want to authenticate your request by opening a browser window for you to log in or sign up to Prisma. Once you’ve logged in, close the window and go back to the command prompt.
  4. The next prompt requires you to choose a region for the demo server to be hosted on Prisma Cloud. Pick one of your choice and press the Enter key to continue.
  5. Now you get asked to choose a name for the service. Enter graphql-intro (or any name of your choosing) and continue.
  6. The next prompt asks for a name to give the current stage of our workflow. Accept the default by pressing Enter to continue.

The CLI takes that information, together with the information in prisma.yml, to set up the demo server. Once it’s done, it updates the file with the endpoint to the Prisma server. It’ll also print to the console information about how the database was set up.
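
For example, after a successful deploy the endpoint line in prisma.yml points at your demo server. The exact URL depends on the region, workspace, and names you chose; an illustrative (hypothetical) value looks roughly like this:

# Illustrative values only - yours will differ based on region, workspace and service name.
endpoint: https://eu1.prisma.sh/my-workspace/graphql-intro/dev
datamodel: datamodel.prisma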

With the server set up, the next step is to generate the Prisma client for our data model. The Prisma client is auto-generated based on your data model and gives you an API to communicate with the Prisma service. Run the following command to generate our Prisma client.

prisma generate

This command generates the client API to access the demo server we created earlier. It should dump a couple of files in src/prisma/client.
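
To get a feel for what was generated, here is a minimal sketch that exercises the client on its own, outside the GraphQL server (the method names follow from our data model; adjust the require path to wherever you run the script from):

// Minimal sketch: using the generated Prisma client directly.
const { prisma } = require("./src/prisma/client");

async function main() {
  const allBooks = await prisma.books();     // fetch all Book records
  const allAuthors = await prisma.authors(); // fetch all Author records
  console.log(allBooks, allAuthors);
}

main().catch(console.error);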

The next step for us is to connect our GraphQL server to the database server using the Prisma client, and get data from there.

Open src/index.js, import the prisma instance exported from the generated client, and then delete the books variable.

const { GraphQLServer } = require("graphql-yoga");
const { prisma } = require("./prisma/client");

// ...rest of the code remains untouched

We also need one more dependency in order to run the Prisma client. Open the command line and run npm install prisma-client-lib to install this package.

Using Prisma Client In Resolvers

Now that we have the Prisma client generated, we'll need to use it in our resolvers. We'll pass down the prisma instance through the context argument that every resolver function gets.

I mentioned that the context argument is useful for holding contextual information, and that you can read or write data to it. To work with the Prisma client, we'll write the prisma instance from the generated client to the context object when the GraphQL server is being initialized.

In src/index.js, on line 32, update the initialisation of the GraphQLServer as follows.

const server = new GraphQLServer({
  typeDefs,
  resolvers,
  context: { prisma }
});

We will also update the resolvers to use prisma for resolving queries. Update the Query property in the resolvers variable as follows:

const resolvers = {
  Query: {
    books: (root, args, context, info) => context.prisma.books(),
    book: (root, args, context, info) => context.prisma.book({ id: args.id })
  },
};

In those resolvers we’re calling a function on the prisma client instance attached to the context. The function prisma.books() gives us all books in the database, while prisma.book({ id: args.id }) gets us a book based on the passed-in id.
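
With those resolvers in place, a simple query like the one below (run it in the GraphQL Playground once the server is up) now reads straight from the database; book(id: ...) works the same way with an id from your own database.

query {
  books {
    title
    pages
  }
}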

Adding Mutation Operations

So far we’re able to fetch data from the GraphQL API, but we need a way to update data on the server. A GraphQL mutation is a type of operation that allows clients to modify data on the server. It is through this operation type that we’re able to add, remove, and update records on the server.

To read data we use the GraphQL query operation type, which you learnt about in the previous post and which we touched on in the previous section.

We will add a new feature to our GraphQL API so that we can add books and authors. We will start by updating the GraphQL schema. Update the typeDefs variable in index.js as follows:

const typeDefs = `
  type Book {
    id: ID!
    title: String!
    pages: Int
    chapters: Int
    authors: [Author!]!
  }

  type Author {
    id: ID!
    name: String!
    books: [Book!]!
  }

  type Query {
    books: [Book!]
    book(id: ID!): Book
    authors: [Author!]
  }

  type Mutation {
    book(title: String!, authors: [String!]!, pages: Int, chapters: Int): Book!
  }
`;

We have updated our GraphQL schema to add two new types, Author and Mutation. We added a new field authors, which is a list of Author, to the Book type, and a new field authors: [Author!] to the root query type. I also changed the fields named id to use the ID type.

This is because we’re using that type in our data model, and the database will generate globally unique identifiers for those fields, which won’t match the Int type we’ve been using so far. The root Mutation type defines our mutation operations; we have only one field in it, called book, which takes in the parameters needed to create a book.

The next step in adding mutations to the API is to implement resolvers for the new fields and types. With index.js still open, go to line 30, where the resolvers variable is defined, and add a new Mutation field to the object as follows.

const resolvers = {
  Mutation: {
    book: async (root, args, context, info) => {
      let authorsToCreate = [];
      let authorsToConnect = [];

      for (const authorName of args.authors) {
        const author = await context.prisma.author({ name: authorName });
        if (author) authorsToConnect.push(author);
        else authorsToCreate.push({ name: authorName });
      }

      return context.prisma.createBook({
        title: args.title,
        pages: args.pages,
        chapters: args.chapters,
        authors: {
          create: authorsToCreate,
          connect: authorsToConnect
        }
      });
    }
  },
  Query: {

  },
  Book: {

  }
};

Just like every other resolver function, the resolver for book in the root mutation type takes four arguments; we get the data that needs to be created from the args parameter, and the prisma instance from the context parameter. This resolver is implemented such that it will create the book record in the database, create any author that does not exist, and then link the records based on the data relationship defined in our data model.

All this will be done as one transaction in the database. We used what Prisma refers to as nested object writes to modify multiple database records across relations in a single transaction.
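
To make the shape of a nested object write concrete, here is a standalone sketch with hard-coded, illustrative values (the author names are made up; connect expects a unique field such as name or id):

const { prisma } = require("./prisma/client");

// Sketch with illustrative values: one existing author is connected, one new author
// is created, and Prisma performs the whole write as a single transaction.
async function createExampleBook() {
  return prisma.createBook({
    title: "Example Book",
    pages: 100,
    chapters: 8,
    authors: {
      connect: [{ name: "Existing Author" }], // link an author already in the database
      create: [{ name: "Brand New Author" }]  // create and link a brand-new author
    }
  });
}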

While we have the resolver for the root mutation type, we still need to add resolvers for the new Author type and the new fields added to the Query and Book types. Update the Query, Book, and Author resolvers as follows:

const resolvers = {
  Mutation: {

  },
  Query: {
    books: (root, args, context, info) => context.prisma.books(),
    book: (root, args, context, info) => context.prisma.book({ id: args.id }),
    authors: (root, args, context, info) => context.prisma.authors()
  },
  Book: {
    authors: (parent, args, context) => context.prisma.book({ id: parent.id }).authors()
  },
  Author: {
    books: (parent, args, context) => context.prisma.author({ id: parent.id }).books()
  }
};

The authors field resolver on the root query type is as simple as calling prisma.authors() to get all the authors in the database. You should notice that the resolvers for the fields with scalar types in Book and Author were omitted. This is because the GraphQL server can infer how to resolve those fields by matching the result to a property of the same name on the parent parameter. The relation fields cannot be resolved that way, so we needed to provide an implementation; we call into Prisma to get that data, as you’ve seen.
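
If you were to spell those implicit scalar resolvers out, they would look roughly like this; you don't need to add them, this is only to illustrate what the server does for you by default:

// Equivalent of the default resolvers the GraphQL server provides implicitly.
const implicitScalarResolvers = {
  Book: {
    title: parent => parent.title,
    pages: parent => parent.pages,
    chapters: parent => parent.chapters
    // authors still needs the explicit Prisma call shown above
  },
  Author: {
    name: parent => parent.name
  }
};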

After all these edits, your index.js should be the same as the one below:

const { GraphQLServer } = require("graphql-yoga");
const { prisma } = require("./prisma/client");

const typeDefs = `
  type Book {
    id: ID!
    title: String!
    pages: Int
    chapters: Int
    authors: [Author!]!
  }

  type Author {
    id: ID!
    name: String!
    books: [Book!]!
  }

  type Query {
    books: [Book!]
    book(id: ID!): Book
    authors: [Author!]
  }

  type Mutation {
    book(title: String!, authors: [String!]!, pages: Int, chapters: Int): Book!
  }
`;

const resolvers = {
  Mutation: {
    book: async (root, args, context, info) => {
      let authorsToCreate = [];
      let authorsToConnect = [];

      for (const authorName of args.authors) {
        const author = await context.prisma.author({ name: authorName });
        if (author) authorsToConnect.push(author);
        else authorsToCreate.push({ name: authorName });
      }

      return context.prisma.createBook({
        title: args.title,
        pages: args.pages,
        chapters: args.chapters,
        authors: {
          create: authorsToCreate,
          connect: authorsToConnect
        }
      });
    }
  },
  Query: {
    books: (root, args, context, info) => context.prisma.books(),
    book: (root, args, context, info) => context.prisma.book({ id: args.id }),
    authors: (root, args, context, info) => context.prisma.authors()
  },
  Book: {
    authors: (parent, args, context) =>
      context.prisma.book({ id: parent.id }).authors()
  },
  Author: {
    books: (parent, args, context) =>
      context.prisma.author({ id: parent.id }).books()
  }
};

const server = new GraphQLServer({
  typeDefs,
  resolvers,
  context: { prisma }
});
server.start(() => console.log("Server is running on http://localhost:4000"));

Testing The GraphQL API

So far we have updated our schema and added resolvers that call into the database server to get data. We have now come to the point where we need to test our API and see if it works as expected. Open the command line and run node src/index.js to start the server, then open localhost:4000 in your browser. This should bring up the GraphQL Playground. Copy and run the mutation below to add a book.

mutation {
  book(title: "Introduction to GraphQL", pages: 150, chapters: 12, authors: ["Peter Mbanugo", "Peter Smith"]) {
    title
    pages
    authors {
      name
    }
  }
}

Now that the book is created, we can query for the authors in the application.

query {
  authors {
    name
    books {
      title
    }
  }
}
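
If the mutation from the previous step succeeded, the response should look roughly like the following; the values mirror the data we just created, so your output may differ if you used different titles or author names:

{
  "data": {
    "authors": [
      { "name": "Peter Mbanugo", "books": [{ "title": "Introduction to GraphQL" }] },
      { "name": "Peter Smith", "books": [{ "title": "Introduction to GraphQL" }] }
    ]
  }
}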

That’s A Wrap!

I introduced you to GraphQL mutations, one of the three root operation types in GraphQL. We updated our schema with new functionality, including a mutation to add books to the application, and switched to Prisma as our database access layer. I showed you how to define a data model using the same schema definition language as GraphQL, how to work with the CLI and generate a Prisma client, and how to read and write data using the Prisma client. Since our data is stored on Prisma Cloud, you can access your services and database online at app.prisma.io.

We added new functionality to the application in this post. This should leave you with the skills to build a GraphQL API that performs CRUD operations, and it should let you brag to your friends that you’re now a GraphQL developer. To prove that to yourself, I want you to add a new set of features to your API as follows:

  1. Add a query to find authors by their name.
  2. Allow books to have publishers. This will have you add a new type to the schema. You should be able to independently add publishers and query for all the books belonging to a publisher.

If you get stuck or want me to have a look at your solution, feel free to reach out. I hope this tutorial helps you!


Originally published on dev.to

#graphql #javascript #node-js #api #web-development
