GraphQL has become a buzzword over the last few years after Facebook made it open-source. I have tried GraphQL with Node.js, and I agree with all the buzz about the advantages and simplicity of GraphQL.
So what is GraphQL? This is what the official GraphQL definition says:
GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.
I recently switched to Golang (from Node.js) for a new project I’m working on, and I decided to try GraphQL with it. There are not many library options in Golang, but I have tried Thunder, graphql, graphql-go, and gqlgen, and I have to say that gqlgen comes out on top of all the libraries I have tried.
gqlgen is still in beta, with version 0.7.2 being the latest at the time of writing, and it is evolving rapidly. You can find their roadmap here. 99designs is now officially sponsoring the project, so we should see even faster development of this awesome open-source library. vektah and neelance are the major contributors, and neelance also wrote graphql-go.
So let’s dive into the library semantics assuming you have basic GraphQL knowledge.
gqlgen’s headline promise is strict typing, and I think that is the most promising thing about the library: you will never see map[string]interface{} here, as it uses a strictly typed approach.
Apart from that, it uses a schema-first approach: you define your API using the GraphQL Schema Definition Language, and gqlgen’s code-generation tooling auto-generates all of the GraphQL plumbing, so you only need to implement the core logic of each resolver method.
I have divided this article into two phases: first the basics (project setup, mutations, queries, and subscriptions) and then more advanced concepts such as authentication, dataloaders, and query complexity.
We will use a video publishing site as an example in which a user can publish a video, add screenshots, add a review, and get videos and related videos.
mkdir -p $GOPATH/src/github.com/ridhamtarpara/go-graphql-demo/
Create the following schema in the project root:
type User {
id: ID!
name: String!
email: String!
}
type Video {
id: ID!
name: String!
description: String!
user: User!
url: String!
createdAt: Timestamp!
screenshots: [Screenshot]
related(limit: Int = 25, offset: Int = 0): [Video!]!
}
type Screenshot {
id: ID!
videoId: ID!
url: String!
}
input NewVideo {
name: String!
description: String!
userId: ID!
url: String!
}
type Mutation {
createVideo(input: NewVideo!): Video!
}
type Query {
Videos(limit: Int = 25, offset: Int = 0): [Video!]!
}
scalar Timestamp
Here we have defined our basic models, one mutation to publish new videos, and one query to get all videos. You can read more about the GraphQL schema here. We have also defined one custom type (scalar), as by default GraphQL has only five scalar types: Int, Float, String, Boolean, and ID.
So if you want to use a custom type, you can define a custom scalar in schema.graphql (as we have done with Timestamp) and provide its implementation in code. In gqlgen, you need to provide marshal and unmarshal methods for every custom scalar and map them in gqlgen.yml.
Another major change in the latest gqlgen version is that the dependency on compiled binaries has been removed. So add the following file to your project under scripts/gqlgen.go:
// +build ignore

package main

import "github.com/99designs/gqlgen/cmd"

func main() {
cmd.Execute()
}
and initialize dep with:
dep init
Now it’s time to take advantage of the library’s codegen feature which generates all the boring (but interesting for a few) skeleton code.
go run scripts/gqlgen.go init
which will create the following files:
gqlgen.yml — Config file to control code generation.
generated.go — The generated code, which you might not want to see.
models_gen.go — All the models for input and type of your provided schema.
resolver.go — You need to write your implementations.
server/server.go — entry point with an http.Handler to start the GraphQL server.
Let’s have a look at one of the generated models, the Video type:
type Video struct {
ID string `json:"id"`
Name string `json:"name"`
User User `json:"user"`
URL string `json:"url"`
CreatedAt string `json:"createdAt"`
Screenshots []*Screenshot `json:"screenshots"`
Related []Video `json:"related"`
}
Here, as you can see, both ID and CreatedAt are defined as strings. The other related models are mapped accordingly, but in the real world you don’t want this — if you are using a SQL database, you want your ID field to be an int or int64, depending on your schema.
For example, I am using PostgreSQL for this demo, so I want ID as an int and CreatedAt as a time.Time. So we need to define our own model and instruct gqlgen to use it instead of generating a new one.
type Video struct {
ID int `json:"id"`
Name string `json:"name"`
Description string `json:"description"`
User User `json:"user"`
URL string `json:"url"`
CreatedAt time.Time `json:"createdAt"`
Related []Video
}
// Let's redefine the base ID type to use an int id
func MarshalID(id int) graphql.Marshaler {
return graphql.WriterFunc(func(w io.Writer) {
io.WriteString(w, strconv.Quote(fmt.Sprintf("%d", id)))
})
}
// And the same for the unmarshaler
func UnmarshalID(v interface{}) (int, error) {
id, ok := v.(string)
if !ok {
return 0, fmt.Errorf("ids must be strings")
}
return strconv.Atoi(id)
}
func MarshalTimestamp(t time.Time) graphql.Marshaler {
timestamp := t.Unix() * 1000
return graphql.WriterFunc(func(w io.Writer) {
io.WriteString(w, strconv.FormatInt(timestamp, 10))
})
}
func UnmarshalTimestamp(v interface{}) (time.Time, error) {
if tmpStr, ok := v.(int); ok {
return time.Unix(int64(tmpStr), 0), nil
}
return time.Time{}, errors.TimeStampError
}
and update gqlgen.yml to map these models:
schema:
- schema.graphql
exec:
  filename: generated.go
model:
  filename: models_gen.go
resolver:
  filename: resolver.go
  type: Resolver
models:
  Video:
    model: github.com/ridhamtarpara/go-graphql-demo/api.Video
  ID:
    model: github.com/ridhamtarpara/go-graphql-demo/api.ID
  Timestamp:
    model: github.com/ridhamtarpara/go-graphql-demo/api.Timestamp
So, the focal point is the custom definitions for ID and Timestamp, with their marshal and unmarshal methods and their mapping in gqlgen.yml. Now when a client sends a string ID, UnmarshalID converts it into an int; when sending the response, MarshalID converts the int back into a string. The same goes for Timestamp and any other custom scalar you define.
Now it’s time to implement the real logic. Open resolver.go and provide implementations for the mutation and the query. The stubs are already auto-generated with a “not implemented” panic statement, so let’s replace them.
func (r *mutationResolver) CreateVideo(ctx context.Context, input NewVideo) (api.Video, error) {
newVideo := api.Video{
URL: input.URL,
Name: input.Name,
CreatedAt: time.Now().UTC(),
}
rows, err := dal.LogAndQuery(r.db, "INSERT INTO videos (name, url, user_id, created_at) VALUES($1, $2, $3, $4) RETURNING id",
input.Name, input.URL, input.UserID, newVideo.CreatedAt)
defer rows.Close()
if err != nil || !rows.Next() {
return api.Video{}, err
}
if err := rows.Scan(&newVideo.ID); err != nil {
errors.DebugPrintf(err)
if errors.IsForeignKeyError(err) {
return api.Video{}, errors.UserNotExist
}
return api.Video{}, errors.InternalServerError
}
return newVideo, nil
}
func (r *queryResolver) Videos(ctx context.Context, limit *int, offset *int) ([]api.Video, error) {
var video api.Video
var videos []api.Video
rows, err := dal.LogAndQuery(r.db, "SELECT id, name, url, created_at, user_id FROM videos ORDER BY created_at desc limit $1 offset $2", limit, offset)
defer rows.Close();
if err != nil {
errors.DebugPrintf(err)
return nil, errors.InternalServerError
}
for rows.Next() {
if err := rows.Scan(&video.ID, &video.Name, &video.URL, &video.CreatedAt, &video.UserID); err != nil {
errors.DebugPrintf(err)
return nil, errors.InternalServerError
}
videos = append(videos, video)
}
return videos, nil
}
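Both resolvers call a small dal.LogAndQuery helper that isn’t shown in the article. Judging from how it is invoked, it simply logs the SQL statement and delegates to database/sql; the real helper evidently also tags the log line with the calling resolver’s name (see the logs later on), which this rough sketch skips:

package dal

import (
	"database/sql"
	"log"
)

// LogAndQuery logs the SQL statement and its arguments before executing it,
// so every database round-trip shows up in the server logs.
func LogAndQuery(db *sql.DB, query string, args ...interface{}) (*sql.Rows, error) {
	log.Printf("Query: %s %v", query, args)
	return db.Query(query, args...)
}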
Now let’s hit the mutation from the GraphQL playground:
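The original screenshot isn’t reproduced here, but a mutation along the following lines (all values are made up for illustration) exercises the resolver:

mutation {
  createVideo(input: {
    name: "Test video",
    description: "My first upload",
    userId: "1",
    url: "https://example.com/test.mp4"
  }) {
    id
    name
    createdAt
    user {
      name
    }
  }
}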
Oh, it worked… but wait, why is my user empty 😦? The concept here is similar to lazy versus eager loading: since GraphQL lets the client pick fields, you need to decide which fields you want to populate eagerly and which ones lazily, only when they are requested.
I have settled on this golden rule for my team when working with gqlgen: keep a field in the Go model only if the parent resolver will populate it eagerly; leave it out of the model whenever it should be loaded lazily, so that gqlgen generates a dedicated resolver for it.
For our use-case, I want to load related videos (and even users) only if a client asks for those fields. But since we included those fields in the model, gqlgen assumes we will provide their values while resolving a video — so currently we are getting an empty struct.
Sometimes you need a certain piece of data every time, so you don’t want to load it with a separate query; instead you can use something like a SQL join to improve performance. For one use-case (not included in this article), I needed video metadata, stored in a different place, every time a video was fetched. Loading it on request would have meant another query, but since I knew I needed it everywhere on the client side, I preferred to load it eagerly for better performance.
So let’s rewrite the model and regenerate the gqlgen code. For the sake of simplicity, we will only define methods for the user.
type Video struct {
ID int `json:"id"`
Name string `json:"name"`
Description string `json:"description"`
UserID int `json:"-"`
URL string `json:"url"`
CreatedAt time.Time `json:"createdAt"`
}
So we have added UserID, removed the User and Related fields, and regenerated the code:
go run scripts/gqlgen.go -v
This will generate the following interface methods to resolve the undefined structs and you need to define those in your resolver:
type VideoResolver interface {
User(ctx context.Context, obj *api.Video) (api.User, error)
Screenshots(ctx context.Context, obj *api.Video) ([]*api.Screenshot, error)
Related(ctx context.Context, obj *api.Video, limit *int, offset *int) ([]api.Video, error)
}
And here is our definition:
func (r *videoResolver) User(ctx context.Context, obj *api.Video) (api.User, error) {
rows, err := dal.LogAndQuery(r.db, "SELECT id, name, email FROM users where id = $1", obj.UserID)
if err != nil {
errors.DebugPrintf(err)
return api.User{}, errors.InternalServerError
}
defer rows.Close()
if !rows.Next() {
return api.User{}, nil
}
var user api.User
if err := rows.Scan(&user.ID, &user.Name, &user.Email); err != nil {
errors.DebugPrintf(err)
return api.User{}, errors.InternalServerError
}
return user, nil
}
Now the result should look something like this:
So this covers the very basics of GraphQL and should get you started. Try a few things with GraphQL and the power of Golang! But before moving on, let’s have a look at subscriptions, which also belong in the scope of this article.
GraphQL provides subscription as an operation type, which allows you to subscribe to real-time data. gqlgen provides WebSocket-based real-time subscription events.
You need to define your subscription in the schema.graphql file. Here we are subscribing to the video-published event.
type Subscription {
videoPublished: Video!
}
Regenerate the code by running go run scripts/gqlgen.go -v.
As explained earlier, this generates an interface in generated.go which you need to implement in your resolver. In our case, it looks like this:
var videoPublishedChannel map[string]chan api.Video
func init() {
videoPublishedChannel = map[string]chan api.Video{}
}
type subscriptionResolver struct{ *Resolver }
func (r *subscriptionResolver) VideoPublished(ctx context.Context) (<-chan api.Video, error) {
id := randx.String(8)
videoEvent := make(chan api.Video, 1)
go func() {
// remove the observer when the client disconnects
<-ctx.Done()
delete(videoPublishedChannel, id)
}()
videoPublishedChannel[id] = videoEvent
return videoEvent, nil
}
func (r *mutationResolver) CreateVideo(ctx context.Context, input NewVideo) (api.Video, error) {
// your logic ...
for _, observer := range videoPublishedChannel {
observer <- newVideo
}
return newVideo, nil
}
Now, you need to emit an event whenever a new video is created. As you can see in the CreateVideo mutation above, we loop over the registered observers and push the new video onto each channel.
And it’s time to test the subscription:
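The demo screenshot isn’t included here; in the GraphQL playground you can run a subscription like the one below in one tab, then fire the createVideo mutation from another tab and watch the event arrive:

subscription {
  videoPublished {
    id
    name
    url
  }
}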
GraphQL comes with certain advantages, but everything that glitters is not gold. You need to take care of a few things like authorizations, query complexity, caching, N+1 query problem, rate limiting, and a few more issues — otherwise it will put you in performance jeopardy.
Every time I read a tutorial like this, I feel like I know everything I need to know and that all my problems are solved.
But when I start working on things on my own, I usually end up getting an internal server error or never-ending requests or dead ends and I have to dig deep into that to carve my way out. Hopefully we can help prevent that here.
Let’s take a look at a few advanced concepts starting with basic authentication.
In a REST API, you have some sort of authentication system and out-of-the-box authorization on particular endpoints. But in GraphQL only one endpoint is exposed, so you can achieve this with schema directives.
You need to edit your schema.graphql as follows:
type Mutation {
createVideo(input: NewVideo!): Video! @isAuthenticated
}
directive @isAuthenticated on FIELD_DEFINITION
We have created an isAuthenticated directive and applied it to the createVideo mutation. After you regenerate the code, you need to provide the directive’s implementation. Currently, directives are wired up on the generated Config struct rather than through an interface, so we have to supply that implementation ourselves.
I have updated the generated server.go and created a method that returns the GraphQL config for it, as follows:
func NewRootResolvers(db *sql.DB) Config {
c := Config{
Resolvers: &Resolver{
db: db,
},
}
// Schema Directive
c.Directives.IsAuthenticated = func(ctx context.Context, obj interface{}, next graphql.Resolver) (res interface{}, err error) {
ctxUserID := ctx.Value(UserIDCtxKey)
if ctxUserID != nil {
return next(ctx)
} else {
return nil, errors.UnauthorisedError
}
}
return c
}
rootHandler := dataloaders.DataloaderMiddleware(
db,
handler.GraphQL(
go_graphql_demo.NewExecutableSchema(go_graphql_demo.NewRootResolvers(db)),
),
)
http.Handle("/query", auth.AuthMiddleware(rootHandler))
We have read the userId from the context. Looks strange, right? How did userId get into the context, and why the context at all? gqlgen only gives you the request context at the resolver level, so you cannot read HTTP request data such as headers or cookies in GraphQL resolvers or directives. Therefore, you need to add your own middleware, fetch that data from the request, and put it into the context.
So we need to define an auth middleware that fetches auth data from the request and validates it. I haven’t implemented any real validation logic; for demo purposes I simply pass the userId in the Authorization header (a minimal sketch follows). Then chain this middleware in server.go along with the new config-loading method.
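The middleware itself isn’t reproduced in this article; here is a rough sketch under the same assumption the demo makes (the raw user id arrives in the Authorization header). The context key is defined locally for illustration — in the real project UserIDCtxKey would be shared with the resolver package:

package auth

import (
	"context"
	"net/http"
)

// ctxKey avoids collisions with other context values.
type ctxKey string

// UserIDCtxKey is the key the isAuthenticated directive reads from the context.
const UserIDCtxKey ctxKey = "userId"

// AuthMiddleware copies the (demo-only) user id from the Authorization header
// into the request context so resolvers and schema directives can see it.
func AuthMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if userID := r.Header.Get("Authorization"); userID != "" {
			ctx := context.WithValue(r.Context(), UserIDCtxKey, userID)
			r = r.WithContext(ctx)
		}
		next.ServeHTTP(w, r)
	})
}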
Now, the directive definition makes sense. Don’t handle unauthorized users in your middleware as it will be handled by your directive.
Demo time:
You can even pass arguments in the schema directives like this:
directive @hasRole(role: Role!) on FIELD_DEFINITION
enum Role { ADMIN USER }
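The article doesn’t implement @hasRole. Assuming gqlgen generates a Role enum type and a HasRole hook on the config, and assuming a hypothetical UserRoleCtxKey that the auth middleware would have to populate, the handler could look roughly like this, placed inside NewRootResolvers next to the IsAuthenticated directive:

// Sketch only: Role and c.Directives.HasRole are assumed to be generated by
// gqlgen from the schema above; UserRoleCtxKey is hypothetical.
c.Directives.HasRole = func(ctx context.Context, obj interface{}, next graphql.Resolver, role Role) (interface{}, error) {
	ctxRole, ok := ctx.Value(UserRoleCtxKey).(Role)
	if !ok || ctxRole != role {
		return nil, errors.UnauthorisedError
	}
	return next(ctx)
}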
This all looks fancy, doesn’t it? You are loading data when needed. Clients have control of the data, there is no under-fetching and no over-fetching. But everything comes with a cost.
So what’s the cost here? Let’s take a look at the logs while fetching all the videos. We have 8 video entries and there are 5 users.
query {
  Videos(limit: 10) {
    name
    user {
      name
    }
  }
}
Query: Videos : SELECT id, name, description, url, created_at, user_id FROM videos ORDER BY created_at desc limit $1 offset $2
Resolver: User : SELECT id, name, email FROM users where id = $1
Resolver: User : SELECT id, name, email FROM users where id = $1
Resolver: User : SELECT id, name, email FROM users where id = $1
Resolver: User : SELECT id, name, email FROM users where id = $1
Resolver: User : SELECT id, name, email FROM users where id = $1
Resolver: User : SELECT id, name, email FROM users where id = $1
Resolver: User : SELECT id, name, email FROM users where id = $1
Resolver: User : SELECT id, name, email FROM users where id = $1
Why nine queries (one to the videos table and eight to the users table)? It looks horrible. I was just about to have a heart attack when I thought about replacing our current REST API servers with this… but dataloaders came as a complete cure for it!
This is known as the N+1 problem: there is one query to fetch the list, and for each of the N items there is another database query.
This is a very serious issue in terms of performance and resources: although these queries are parallel, they will use your resources up.
We will use the dataloaden library from the author of gqlgen. It generates a type-safe dataloader for a given Go type. We will generate the dataloader for the user first.
go get github.com/vektah/dataloaden
dataloaden github.com/ridhamtarpara/go-graphql-demo/api.User
This will generate a file userloader_gen.go, which has methods like Fetch, LoadAll, and Prime.
Now, we need to define the Fetch method to get the result in bulk.
func DataloaderMiddleware(db *sql.DB, next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
userloader := UserLoader{
wait : 1 * time.Millisecond,
maxBatch: 100,
fetch: func(ids []int) ([]*api.User, []error) {
var sqlQuery string
if len(ids) == 1 {
sqlQuery = "SELECT id, name, email from users WHERE id = ?"
} else {
sqlQuery = "SELECT id, name, email from users WHERE id IN (?)"
}
sqlQuery, arguments, err := sqlx.In(sqlQuery, ids)
if err != nil {
log.Println(err)
}
sqlQuery = sqlx.Rebind(sqlx.DOLLAR, sqlQuery)
rows, err := dal.LogAndQuery(db, sqlQuery, arguments...)
if err != nil {
log.Println(err)
return nil, []error{errors.InternalServerError}
}
defer rows.Close()
userById := map[int]*api.User{}
for rows.Next() {
user:= api.User{}
if err := rows.Scan(&user.ID, &user.Name, &user.Email); err != nil {
errors.DebugPrintf(err)
return nil, []error{errors.InternalServerError}
}
userById[user.ID] = &user
}
users := make([]*api.User, len(ids))
for i, id := range ids {
users[i] = userById[id]
}
return users, nil
},
}
ctx := context.WithValue(r.Context(), CtxKey, &userloader)
r = r.WithContext(ctx)
next.ServeHTTP(w, r)
})
}
Here, the loader waits 1 ms and batches at most 100 keys. So now, instead of firing a query per user, the dataloader waits for either 1 millisecond or until 100 user ids have been collected before hitting the database. We need to change our user resolver to use the dataloader instead of the previous query logic.
func (r *videoResolver) User(ctx context.Context, obj *api.Video) (api.User, error) {
user, err := ctx.Value(dataloaders.CtxKey).(*dataloaders.UserLoader).Load(obj.UserID)
if err != nil || user == nil {
return api.User{}, err
}
return *user, nil
}
After this, my logs look like this for similar data:
Query: Videos : SELECT id, name, description, url, created_at, user_id FROM videos ORDER BY created_at desc limit $1 offset $2
Dataloader: User : SELECT id, name, email from users WHERE id IN ($1, $2, $3, $4, $5)
Now only two queries are fired, so everyone is happy. The interesting part is that only five user keys are passed to the query even though there are eight videos: the dataloader de-duplicated the ids.
In GraphQL you give the client a powerful way to fetch whatever it needs, but this exposes you to the risk of denial-of-service attacks.
Let’s understand this through an example which we’ve been referring to for this whole article.
Now we have a related field in video type which returns related videos. And each related video is of the graphql video type so they all have related videos too…and this goes on.
Consider the following query to understand the severity of the situation:
{
  Videos(limit: 10, offset: 0) {
    name
    url
    related(limit: 10, offset: 0) {
      name
      url
      related(limit: 10, offset: 0) {
        name
        url
        related(limit: 100, offset: 0) {
          name
          url
        }
      }
    }
  }
}
If I add one more nested level or increase the limits to 100, millions of videos would be loaded in a single call. Perhaps (or rather, definitely) this would make your database and service unresponsive.
gqlgen provides a way to define the maximum query complexity allowed in one call. You just need to add one line (Line 5 in the following snippet) in your graphql handler and define the maximum complexity (300 in our case).
rootHandler := dataloaders.DataloaderMiddleware(
db,
handler.GraphQL(
go_graphql_demo.NewExecutableSchema(go_graphql_demo.NewRootResolvers(db)),
handler.ComplexityLimit(300)
),
)
gqlgen assigns a fixed complexity weight to each field, treating structs, arrays, and strings as equal, so with the defaults the complexity of this query is 12. But we know that nested, paginated fields weigh far more, so we need to tell gqlgen to calculate accordingly (in simple terms, multiply the child complexity by the limit instead of just summing).
func NewRootResolvers(db *sql.DB) Config {
c := Config{
Resolvers: &Resolver{
db: db,
},
}
// Complexity
countComplexity := func(childComplexity int, limit *int, offset *int) int {
return *limit * childComplexity
}
c.Complexity.Query.Videos = countComplexity
c.Complexity.Video.Related = countComplexity
// Schema Directive
c.Directives.IsAuthenticated = func(ctx context.Context, obj interface{}, next graphql.Resolver) (res interface{}, err error) {
ctxUserID := ctx.Value(UserIDCtxKey)
if ctxUserID != nil {
return next(ctx)
} else {
return nil, errors.UnauthorisedError
}
}
return c
}
Just like directives, complexity functions are configured on the Config struct, so we have changed our config method accordingly.
I haven’t implemented the related-videos logic and just return an empty array (a minimal placeholder is sketched below), so related is empty in the output, but this should give you a clear idea of how to use query complexity.
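For reference, such a placeholder, matching the Related signature from the generated VideoResolver interface shown earlier, would look roughly like this:

func (r *videoResolver) Related(ctx context.Context, obj *api.Video, limit *int, offset *int) ([]api.Video, error) {
	// Real recommendation/pagination logic would go here.
	return []api.Video{}, nil
}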
This code is on Github. You can play around with it, and if you have any questions or concerns let me know in the comment section.