GraphQL is a query language developed by Facebook. It has its own query syntax but still sends data over HTTP, and it uses a single endpoint for all requests.
The benefits of GraphQL include being able to specify the types of the data fields you send and of the data fields that are returned.
The syntax is simple and easy to understand, and the data is still returned as JSON for easy access and manipulation. This is why GraphQL has been gaining traction in recent years.
GraphQL requests are still HTTP requests, but you always send and receive data through a single endpoint, usually the graphql endpoint. All requests are POST requests, whether you are getting, manipulating, or deleting data.
To distinguish between getting and manipulating data, GraphQL requests can be classified as queries and mutations. Below is one example of a GraphQL request:
{
  getPhotos(page: 1) {
    photos {
      id
      fileLocation
      description
      tags
    }
    page
    totalPhotos
  }
}
With this request, we are instructing the server to call the getPhotos resolver, which is a function that returns the data, with the argument page set to 1. We also want to get back the id, fileLocation, description, and tags fields of the photos array, as well as the page and totalPhotos fields.
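Since every GraphQL request is just a POST to the single endpoint, a client only needs to send the query string in a JSON body. Here is a minimal sketch of building that body in plain JavaScript (the localhost URL in the comment is an assumption about where the server will run):

```javascript
// Build the JSON body an HTTP client would POST to the single /graphql
// endpoint; the query string is the same request shown above.
const query = `{
  getPhotos(page: 1) {
    photos { id fileLocation description tags }
    page
    totalPhotos
  }
}`;

const body = JSON.stringify({ query });

// With any HTTP client, the request would look roughly like:
// fetch('http://localhost:3000/graphql', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body
// }).then(res => res.json());

console.log(JSON.parse(body).query.includes('getPhotos')); // true
```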
GraphQL APIs can use any database system since it only changes the API layer. The logic underneath is still the same as any REST API.
Node.js with Express has great support for building GraphQL APIs. We can use the express-graphql library to build our GraphQL API. It is a middleware that lets you add GraphQL functionality to your Express back-end app.
In this story, we will build an image gallery app with a GraphQL API that accepts file uploads. We’ll add some text data using Express and include an Angular front end that uses Material Design with Angular Material.
We start with the back-end part of our image-gallery app. To start building the app, we create a new project folder with a backend folder inside to store the back-end files.
Then we go into the folder and run npx express-generator to generate the files for the app.
After that, we need to install some packages to let us use the latest features of JavaScript in our app.
First, we install the packages for the skeleton app we generated by running npm i.
Then we run npm i @babel/cli @babel/core @babel/node @babel/preset-env to install the latest Babel packages.
Next, we install nodemon globally by running npm i -g nodemon so that our app restarts automatically during development as our code changes. Then we make a file called .babelrc at the root level of the back-end app's project folder and add the following:
{
  "presets": [
    "@babel/preset-env"
  ]
}
Then, in the scripts section of package.json, we put:
"babel-node": "babel-node",
"start": "nodemon --exec npm run babel-node -- ./bin/www"
This allows us to run our app with the latest JavaScript features available. If you get errors, uninstall previous versions of the Babel CLI and Babel Core packages and try the steps above again. ./bin/www is the entry point for the back-end app.
Next, we use the Sequelize CLI to add the initial ORM code to our back-end app by running npx sequelize-cli init. You should now have a config/config.json file and a models folder. Then we run npm i sequelize to install the Sequelize library.
Then we can make our model by running npx sequelize-cli model:generate --name Photo --attributes fileLocation:string,description:string,tags:string. This creates the Photo model and, when we run the migration generated by that command, a photos table in our database.
Next, we rename config.json to config.js and install the dotenv and Postgres packages by running npm i dotenv pg pg-hstore.
Then, in config/config.js, we put:
require('dotenv').config();

const dbHost = process.env.DB_HOST;
const dbName = process.env.DB_NAME;
const dbUsername = process.env.DB_USERNAME;
const dbPassword = process.env.DB_PASSWORD;
const dbPort = process.env.DB_PORT || 5432;

module.exports = {
  development: {
    username: dbUsername,
    password: dbPassword,
    database: dbName,
    host: dbHost,
    port: dbPort,
    dialect: 'postgres'
  },
  test: {
    username: dbUsername,
    password: dbPassword,
    database: 'graphql_app_test',
    host: dbHost,
    port: dbPort,
    dialect: 'postgres'
  },
  production: {
    use_env_variable: 'DATABASE_URL',
    username: dbUsername,
    password: dbPassword,
    database: dbName,
    host: dbHost,
    port: dbPort,
    dialect: 'postgres'
  }
};
This lets us read our database credentials and database name from a .env file in the root of the back-end app's project folder. Before running our migration, we have to create an empty database with a name of your choice. Set that name as the value of the DB_NAME key in the .env file, and do the same with the database password.
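The .env file might look like the following; all of the values here are placeholders, so substitute your own database's host, name, and credentials:

```
DB_HOST=localhost
DB_NAME=image_gallery
DB_USERNAME=db_user
DB_PASSWORD=db_password
DB_PORT=5432
```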
Now we have everything we need to run our migration, which we do by running npx sequelize-cli db:migrate. You should now have an empty photos table in your database.
Next, we make the photos folder and put an empty .gitkeep file in it so we can commit it.
After the database connection is established, we can start building the logic. Since we are building a GraphQL API, we need to install the GraphQL libraries for Express.
To do this, we run npm i cors express-graphql graphql graphql-tools graphql-upload. We need the cors library so that we can communicate with our front-end app, which will be hosted on a different domain. The others are GraphQL libraries. graphql-upload lets us accept files easily in our GraphQL endpoints: we can pass a JavaScript file object straight in and save it to disk after converting it to a read stream.
After installing the libraries, we write the logic for our app. We make a folder called graphql in the root folder of our back-end app to hold the files with the app's logic. In it, we make a file called resolvers.js and add the following:
const Op = require('sequelize').Op;
const models = require('../models');
const fs = require('fs');
const storeFS = ({ stream, filename }) => {
const uploadDir = '../backend/photos';
const path = `${uploadDir}/${filename}`;
return new Promise((resolve, reject) =>
stream
.on('error', error => {
if (stream.truncated)
// delete the truncated file
fs.unlinkSync(path);
reject(error);
})
.pipe(fs.createWriteStream(path))
.on('error', error => reject(error))
.on('finish', () => resolve({ path }))
);
}
export const getPhotos = async (args) => {
const page = args.page;
const photos = await models.Photo.findAll({
offset: (page - 1) * 10,
limit: 10
});
const totalPhotos = await models.Photo.count();
return {
photos,
page,
totalPhotos
};
}
export const addPhoto = async (args) => {
const { description, tags } = args;
const { filename, mimetype, createReadStream } = await args.file;
const stream = createReadStream();
const pathObj = await storeFS({ stream, filename });
const fileLocation = pathObj.path;
const photo = await models.Photo.create({
fileLocation,
description,
tags
})
return photo;
}
export const editPhoto = async (args) => {
const { id, description, tags } = args;
const { filename, mimetype, createReadStream } = await args.file;
const stream = createReadStream();
const pathObj = await storeFS({ stream, filename });
const fileLocation = pathObj.path;
const photo = await models.Photo.update({
fileLocation,
description,
tags
}, {
where: {
id
}
})
return photo;
}
export const deletePhoto = async (args) => {
const { id } = args;
await models.Photo.destroy({
where: {
id
}
})
return id;
}
export const searchPhotos = async (args) => {
const searchQuery = args.searchQuery;
const photos = await models.Photo.findAll({
where: {
[Op.or]: [
{
description: {
[Op.like]: `%${searchQuery}%`
}
},
{
tags: {
[Op.like]: `%${searchQuery}%`
}
}
]
}
});
const totalPhotos = await models.Photo.count();
return {
photos,
totalPhotos
};
}
In the code above, we have the resolvers to which the GraphQL requests are ultimately directed.
We have a resolver for adding a photo, which accepts a file along with its description and tags strings. The edit resolver is similar, except that it also accepts an ID, which is an integer, so users can update an existing photo. The delete resolver takes an ID and deletes the corresponding photo table entry. Note that all the arguments for a request arrive in the args parameter.
The file that we upload arrives as a promise in the args object. We can resolve it, convert the result to a stream, and save it, as we do with the storeFS function. storeFS returns a promise so that we can save the file and then save the text data to the database sequentially.
The searchPhotos resolver takes a string for the search query and then runs a where…or query in the database with the following object:
where: {
  [Op.or]: [
    {
      description: {
        [Op.like]: `%${searchQuery}%`
      }
    },
    {
      tags: {
        [Op.like]: `%${searchQuery}%`
      }
    }
  ]
}
This searches both the description and the tags columns for the search query. Note that LIKE is case-sensitive in Postgres; for case-insensitive matching, Sequelize provides Op.iLike.
Next, we create a file called schema.js in the graphql folder and add the following:
const { buildSchema } = require('graphql');
export const schema = buildSchema(`
  scalar Upload

  type Photo {
    id: Int,
    fileLocation: String,
    description: String,
    tags: String
  }

  type PhotoData {
    photos: [Photo],
    page: Int,
    totalPhotos: Int
  }

  type Query {
    getPhotos(page: Int): PhotoData,
    searchPhotos(searchQuery: String): PhotoData
  }

  type Mutation {
    addPhoto(file: Upload!, description: String, tags: String): Photo
    editPhoto(id: Int, file: Upload!, description: String, tags: String): Photo
    deletePhoto(id: Int): Int
  }
`);
When we define the types of data for our queries and mutations, note that we also define a new scalar type called Upload so that we can accept file data with the graphql-upload library.
Query contains all of our queries. The left side of each colon is the signature of the corresponding resolver, and the right side is the type it returns.
Photo and PhotoData are types we defined by adding fields of the scalar types.
Int and String are basic scalar types built into GraphQL. Anything with an exclamation mark is required.
The buildSchema function builds the schema that we will use with the Express GraphQL middleware.
getPhotos and searchPhotos are the query endpoints; addPhoto, editPhoto, and deletePhoto are the mutations. We call these endpoints in our requests as we did in the example at the beginning of the story.
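For instance, once the server is running, the deletePhoto mutation defined by this schema could be called with a request like the following (the id value 1 is just an illustration):

```graphql
mutation {
  deletePhoto(id: 1)
}
```

Because deletePhoto returns a plain Int, no selection set is needed; the response will contain the ID of the deleted entry.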
Next, in app.js, we put the following:
const createError = require('http-errors');
const express = require('express');
const path = require('path');
const cookieParser = require('cookie-parser');
const logger = require('morgan');
const expressGraphql = require('express-graphql');
const cors = require('cors');
const app = express();
import { GraphQLUpload } from 'graphql-upload'
import { schema } from './graphql/schema'
import {
getPhotos,
addPhoto,
editPhoto,
deletePhoto,
searchPhotos
} from './graphql/resolvers'
import { graphqlUploadExpress } from 'graphql-upload'
const root = {
Upload: GraphQLUpload,
getPhotos,
addPhoto,
editPhoto,
deletePhoto,
searchPhotos
}
// view engine setup
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'jade');
app.use(cors());
app.use(logger('dev'));
app.use(express.json());
app.use(express.urlencoded({ extended: false }));
app.use(cookieParser());
app.use(express.static(path.join(__dirname, 'public')));
app.use('/photos', express.static(path.join(__dirname, 'photos')));
app.use(
'/graphql',
graphqlUploadExpress({ maxFileSize: 10000000, maxFiles: 10 }),
expressGraphql({
schema,
rootValue: root,
graphiql: true
})
)
// catch 404 and forward to error handler
app.use(function (req, res, next) {
next(createError(404));
});
// error handler
app.use(function (err, req, res, next) {
// set locals, only providing error in development
res.locals.message = err.message;
res.locals.error = req.app.get('env') === 'development' ? err : {};
// render the error page
res.status(err.status || 500);
res.render('error');
});
module.exports = app;
We include the CORS middleware for cross-domain communication, and the only endpoint we have is the graphql endpoint. The graphqlUploadExpress({ maxFileSize: 10000000, maxFiles: 10 }) middleware enables file uploads. Now we have:
const root = {
Upload: GraphQLUpload,
getPhotos,
addPhoto,
editPhoto,
deletePhoto,
searchPhotos
}
and
expressGraphql({
schema,
rootValue: root,
graphiql: true
})
to connect the schema and resolvers together and enable our GraphQL endpoints. graphiql: true enables an interactive sandbox in which we can test our GraphQL requests.
Finally, in bin/www, we have:
#!/usr/bin/env node
require('dotenv').config();
/**
* Module dependencies.
*/
var app = require('../app');
var debug = require('debug')('backend:server');
var http = require('http');
/**
* Get port from environment and store in Express.
*/
var port = normalizePort(process.env.PORT || '3000');
app.set('port', port);
/**
* Create HTTP server.
*/
var server = http.createServer(app);
/**
* Listen on provided port, on all network interfaces.
*/
server.listen(port);
server.on('error', onError);
server.on('listening', onListening);
/**
* Normalize a port into a number, string, or false.
*/
function normalizePort(val) {
var port = parseInt(val, 10);
if (isNaN(port)) {
// named pipe
return val;
}
if (port >= 0) {
// port number
return port;
}
return false;
}
/**
* Event listener for HTTP server "error" event.
*/
function onError(error) {
if (error.syscall !== 'listen') {
throw error;
}
var bind = typeof port === 'string'
? 'Pipe ' + port
: 'Port ' + port;
// handle specific listen errors with friendly messages
switch (error.code) {
case 'EACCES':
console.error(bind + ' requires elevated privileges');
process.exit(1);
break;
case 'EADDRINUSE':
console.error(bind + ' is already in use');
process.exit(1);
break;
default:
throw error;
}
}
/**
* Event listener for HTTP server "listening" event.
*/
function onListening() {
var addr = server.address();
var bind = typeof addr === 'string'
? 'pipe ' + addr
: 'port ' + addr.port;
debug('Listening on ' + bind);
}
Together, these code files let us run our GraphQL API with npm start.
Next, we build the front-end app. First, install the Angular CLI by running npm i -g @angular/cli.
Then go to the root of the project folder and run ng new frontend to scaffold the front-end app. Make sure to choose routing and SCSS when asked about the routing and styling options.
Next, we install our libraries: we need a GraphQL client, Angular Material, and a Flux library for storing the state of our app. We install them by running npm i @ngrx/store @angular/cdk @angular/material, which installs the NgRx store and Angular Material.
Next, we run ng add @ngrx/store to add the skeleton code for the NgRx store.
To install Angular Apollo, the GraphQL client for Angular, we run ng add apollo-angular. This adds a new module and other code that enable us to use GraphQL in our Angular app.
The front-end app will consist of the page where users can get and search their photos and another page where they can upload new photos and edit or delete existing ones. The page where they can get or search their photos will be the home page. It will have a left side menu for navigation.
Now we are ready to write the code. We first run some commands to create some new code files:
ng g component editPhotoDialog --module app
ng g component homePage --module app
ng g component topBar --module app
ng g component uploadPage --module app
ng g service photo --module app
Note that we have to specify the module we want to add the code to with the --module app option so the components and service can be used in our main app module.
In photo.service.ts, which should have been created by those commands, we put:
import { Injectable } from '@angular/core';
import { Apollo } from 'apollo-angular';
import gql from 'graphql-tag';
@Injectable({
providedIn: 'root'
})
export class PhotoService {
constructor(
private apollo: Apollo
) { }
addPhoto(file: File, description: string, tags: string) {
const addPhoto = gql`
mutation addPhoto(
$file: Upload!,
$description: String,
$tags: String
){
addPhoto(
file: $file,
description: $description,
tags: $tags
) {
id,
fileLocation,
description,
tags
}
}
`;
return this.apollo.mutate({
mutation: addPhoto,
variables: {
file,
description,
tags
},
context: {
useMultipart: true
}
})
}
editPhoto(id: number, file: File, description: string, tags: string) {
const editPhoto = gql`
mutation editPhoto(
$id: Int!,
$file: Upload!,
$description: String,
$tags: String
){
editPhoto(
id: $id,
file: $file,
description: $description,
tags: $tags
) {
id,
fileLocation,
description,
tags
}
}
`;
return this.apollo.mutate({
mutation: editPhoto,
variables: {
id,
file,
description,
tags
},
context: {
useMultipart: true
}
})
}
getPhotos(page: number = 1) {
const getPhotos = gql`
query getPhotos(
$page: Int,
){
getPhotos(
page: $page
) {
photos {
id,
fileLocation,
description,
tags
},
page,
totalPhotos
}
}
`;
return this.apollo.mutate({
mutation: getPhotos,
variables: {
page,
}
})
}
deletePhoto(id: number) {
const deletePhoto = gql`
mutation deletePhoto(
$id: Int,
){
deletePhoto(
id: $id
)
}
`;
return this.apollo.mutate({
mutation: deletePhoto,
variables: {
id,
}
})
}
searchPhotos(searchQuery: string) {
const getPhotos = gql`
query searchPhotos(
$searchQuery: String,
){
searchPhotos(
searchQuery: $searchQuery
) {
photos {
id,
fileLocation,
description,
tags
},
page,
totalPhotos
}
}
`;
return this.apollo.mutate({
mutation: getPhotos,
variables: {
searchQuery,
}
})
}
}
This makes use of the Apollo client we just added. The gql tag before each query string parses it into a query document that Apollo can use.
The syntax is very close to the request example above, except that we pass in variables instead of literal numbers or strings. Files are also passed in directly as variables. The useMultipart: true option in the context object lets us upload files with Angular Apollo.
Then, in edit-photo-dialog.component.ts, we put:
import { Component, OnInit, Inject, ViewChild } from '@angular/core';
import { MatDialogRef, MAT_DIALOG_DATA } from '@angular/material';
import { PhotoService } from '../photo.service';
import { environment } from 'src/environments/environment';
import { Store, select } from '@ngrx/store';
import { SET_PHOTOS } from '../reducers/photos-reducer';
import { NgForm } from '@angular/forms';
@Component({
selector: 'app-edit-photo-dialog',
templateUrl: './edit-photo-dialog.component.html',
styleUrls: ['./edit-photo-dialog.component.scss']
})
export class EditPhotoDialogComponent implements OnInit {
@ViewChild('photoUpload', null) photoUpload: any;
photoArrayData: any[] = [];
constructor(
public dialogRef: MatDialogRef<EditPhotoDialogComponent>,
@Inject(MAT_DIALOG_DATA) public photoData: any,
private photoService: PhotoService,
private store: Store<any>
) {
store.pipe(select('photos'))
.subscribe(photos => {
this.photoArrayData = photos;
})
}
ngOnInit() {
}
clickUpload() {
this.photoUpload.nativeElement.click();
}
handleFileInput(files) {
console.log(files);
this.photoData.file = files[0];
}
save(uploadForm: NgForm) {
if (uploadForm.invalid || !this.photoData.file) {
return;
}
const {
id,
file,
description,
tags
} = this.photoData;
this.photoService.editPhoto(id, file, description, tags)
.subscribe(es => {
this.getPhotos();
})
}
getPhotos() {
this.photoService.getPhotos()
.subscribe(res => {
const photoArrayData = (res as any).data.getPhotos.photos.map(p => {
const { id, description, tags } = p;
const pathParts = p.fileLocation.split('/');
const photoPath = pathParts[pathParts.length - 1];
return {
id,
description,
tags,
photoUrl: `${environment.photosUrl}/${photoPath}`
}
});
this.store.dispatch({ type: SET_PHOTOS, payload: photoArrayData });
this.dialogRef.close()
})
}
}
This is the code for the dialog box that opens when users click Edit on a photo row. The photo data is passed in from the upload page and can be edited here. When the user clicks the Save button, the save function is called, and if it succeeds, it calls the getPhotos function to fetch the latest photos and put them in the store.
Next, in edit-photo-dialog.component.html, we put:
<h2>Edit Photo</h2>
<form #photoForm='ngForm' (ngSubmit)='save(photoForm)'>
<div>
<input type="file" id="file" (change)="handleFileInput($event.target.files)" #photoUpload>
<button mat-raised-button (click)='clickUpload()' type='button'>
Upload Photo
</button>
{{photoData?.file?.name}}
</div>
<mat-form-field>
<input matInput placeholder="Description" required #description='ngModel' name='description'
  [(ngModel)]='photoData.description'>
<mat-error *ngIf="description.invalid && (description.dirty || description.touched)">
<div *ngIf="description.errors.required">
Description is required.
</div>
</mat-error>
</mat-form-field>
<br>
<mat-form-field>
<input matInput placeholder="Tags" required #tags='ngModel' name='tags' [(ngModel)]='photoData.tags'>
<mat-error *ngIf="tags.invalid && (tags.dirty || tags.touched)">
<div *ngIf="tags.errors.required">
Tags is required.
</div>
</mat-error>
</mat-form-field>
<br>
<button mat-raised-button type='submit'>Save</button>
</form>
This allows the user to upload a new photo and edit the description and tags fields. And in edit-photo-dialog.component.scss, we add:
#file {
display: none;
}
so that the file upload input is hidden. We open the file picker with a click on the button and receive the file in the handleFileInput handler.
Next, we build the home page. In home-page.component.ts, we put:
import { Component, OnInit, ViewChild } from '@angular/core';
import { PhotoService } from '../photo.service';
import { environment } from 'src/environments/environment';
import { NgForm } from '@angular/forms';
@Component({
selector: 'app-home-page',
templateUrl: './home-page.component.html',
styleUrls: ['./home-page.component.scss']
})
export class HomePageComponent implements OnInit {
photoUrls: string[] = [];
query: any = <any>{};
constructor(
private photoService: PhotoService
) { }
ngOnInit() {
this.getPhotos();
}
getPhotos() {
this.photoService.getPhotos()
.subscribe(res => {
this.photoUrls = (res as any).data.getPhotos.photos.map(p => {
const pathParts = p.fileLocation.split('/');
const photoPath = pathParts[pathParts.length - 1];
return `${environment.photosUrl}/${photoPath}`;
});
})
}
searchPhotos(searchForm: NgForm) {
if (searchForm.invalid) {
return;
}
this.searchPhotosQuery();
}
searchPhotosQuery() {
this.photoService.searchPhotos(this.query.search)
.subscribe(res => {
this.photoUrls = (res as any).data.searchPhotos.photos.map(p => {
const pathParts = p.fileLocation.split('/');
const photoPath = pathParts[pathParts.length - 1];
return `${environment.photosUrl}/${photoPath}`;
});
})
}
}
This gets the photos the user saved and lets users search with the searchPhotosQuery function. Both call the photoService, which uses the Apollo client to make the requests.
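Note how the components turn the stored fileLocation, a server-side path, into a URL the browser can load: they keep only the file name and prepend the front end's photosUrl. A quick sketch of that mapping (the photosUrl value here is an assumption about where the Express /photos route is served):

```javascript
// Turn a stored server-side path into a browser-loadable URL by keeping
// only the file name and prepending the photos base URL.
const photosUrl = 'http://localhost:3000/photos'; // assumed base URL
const fileLocation = '../backend/photos/cat.jpg'; // example stored path

const pathParts = fileLocation.split('/');
const photoPath = pathParts[pathParts.length - 1];
const photoUrl = `${photosUrl}/${photoPath}`;

console.log(photoUrl); // http://localhost:3000/photos/cat.jpg
```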
In home-page.component.html, we put:
<form #searchForm='ngForm' (ngSubmit)='searchPhotos(searchForm)'>
<mat-form-field>
<input matInput placeholder="Search Photos" required #search='ngModel' name='search' [(ngModel)]='query.search'>
<mat-error *ngIf="search.invalid && (search.dirty || search.touched)">
<div *ngIf="search.errors.required">
Search query is required.
</div>
</mat-error>
</mat-form-field>
<br>
<button mat-raised-button type='submit'>Search</button>
</form>
<br>
<mat-grid-list cols="3" rowHeight="1:1">
<mat-grid-tile *ngFor='let p of photoUrls'>
<img [src]='p' class="tile-image">
</mat-grid-tile>
</mat-grid-list>
to display photos in a grid and let users search photos with a text input.
In home-page.component.scss, we add:
.tile-image {
width: 100%;
height: auto;
}
to stretch the image to fit in the grid.
Next, in the reducers folder, we create two files, menu-reducer.ts and photos-reducer.ts, for the reducers that store the state of our app. In menu-reducer.ts, we put:
const TOGGLE_MENU = 'TOGGLE_MENU';
function menuReducer(state, action) {
switch (action.type) {
case TOGGLE_MENU:
state = action.payload;
return state;
default:
return state
}
}
export { menuReducer, TOGGLE_MENU };
And similarly, in photos-reducer.ts, we add:
const SET_PHOTOS = 'SET_PHOTOS';
function photosReducer(state, action) {
switch (action.type) {
case SET_PHOTOS:
state = action.payload;
return state;
default:
return state
}
}
export { photosReducer, SET_PHOTOS };
These store the states of the left-side menu and the photos. In reducers/index.ts, we put:
import { menuReducer } from './menu-reducer';
import { photosReducer } from './photos-reducer';
export const reducers = {
menu: menuReducer,
photos: photosReducer
};
This lets the reducers be included in our app module so that we can manipulate the state.
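To see how these reducers behave, here is a plain-JavaScript sketch of dispatching actions through them; no NgRx is needed for the illustration, since each reducer simply replaces its slice of state with the action payload:

```javascript
// Plain-JS versions of the two reducers above.
const TOGGLE_MENU = 'TOGGLE_MENU';
const SET_PHOTOS = 'SET_PHOTOS';

function menuReducer(state, action) {
  switch (action.type) {
    case TOGGLE_MENU:
      return action.payload; // new menu-open flag replaces the old one
    default:
      return state;
  }
}

function photosReducer(state, action) {
  switch (action.type) {
    case SET_PHOTOS:
      return action.payload; // new photo array replaces the old one
    default:
      return state;
  }
}

// Simulate what store.dispatch does for each slice of state.
let menu = false;
menu = menuReducer(menu, { type: TOGGLE_MENU, payload: true });

let photos = [];
photos = photosReducer(photos, { type: SET_PHOTOS, payload: [{ id: 1 }] });

console.log(menu);          // true
console.log(photos.length); // 1
```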
Next, in top-bar.component.ts, we put:
import { Component, OnInit } from '@angular/core';
import { Store, select } from '@ngrx/store';
import { TOGGLE_MENU } from '../reducers/menu-reducer';
@Component({
selector: 'app-top-bar',
templateUrl: './top-bar.component.html',
styleUrls: ['./top-bar.component.scss']
})
export class TopBarComponent implements OnInit {
menuOpen: boolean;
constructor(
private store: Store<any>
) {
store.pipe(select('menu'))
.subscribe(menuOpen => {
this.menuOpen = menuOpen;
})
}
ngOnInit() {
}
toggleMenu() {
this.store.dispatch({ type: TOGGLE_MENU, payload: !this.menuOpen });
}
}
It has a toggleMenu function that toggles the menu state in the store.
In top-bar.component.html, we put:
<mat-toolbar>
<a (click)='toggleMenu()' class="menu-button">
<i class="material-icons">
menu
</i>
</a>
Image Gallery App
</mat-toolbar>
This shows the toolbar.
In top-bar.component.scss, we add:
.menu-button {
margin-top: 6px;
margin-right: 10px;
cursor: pointer;
}
.menu-button {
color: white;
}
.mat-toolbar-row,
.mat-toolbar-single-row {
height: 64px;
background-color: #fc036b;
color: white;
}
This sets the toolbar colors and makes the spacing look better.
In upload-page.component.ts, we put:
import { Component, OnInit, ViewChild } from '@angular/core';
import { PhotoService } from '../photo.service';
import { environment } from 'src/environments/environment';
import { MatDialog } from '@angular/material';
import { EditPhotoDialogComponent } from '../edit-photo-dialog/edit-photo-dialog.component';
import { Store, select } from '@ngrx/store';
import { SET_PHOTOS } from '../reducers/photos-reducer';
import { NgForm } from '@angular/forms';
@Component({
selector: 'app-upload-page',
templateUrl: './upload-page.component.html',
styleUrls: ['./upload-page.component.scss']
})
export class UploadPageComponent implements OnInit {
photoData: any = <any>{};
photoArrayData: any[] = [];
page: number = 1;
totalPhotos: number = 0;
@ViewChild('photoUpload', null) photoUpload: any;
displayedColumns: string[] = [
'photoUrl',
'description',
'tags',
'edit',
'delete'
]
constructor(
private photoService: PhotoService,
public dialog: MatDialog,
private store: Store<any>
) {
store.pipe(select('photos'))
.subscribe(photos => {
this.photoArrayData = photos;
})
}
ngOnInit() {
this.getPhotos();
}
clickUpload() {
this.photoUpload.nativeElement.click();
}
handleFileInput(files) {
console.log(files);
this.photoData.file = files[0];
}
save(uploadForm: NgForm) {
if (uploadForm.invalid || !this.photoData.file) {
return;
}
const {
file,
description,
tags
} = this.photoData;
this.photoService.addPhoto(file, description, tags)
.subscribe(res => {
this.getPhotos();
})
}
getPhotos() {
this.photoService.getPhotos(this.page)
.subscribe(res => {
const photoArrayData = (res as any).data.getPhotos.photos.map(p => {
const { id, description, tags } = p;
const pathParts = p.fileLocation.split('/');
const photoPath = pathParts[pathParts.length - 1];
return {
id,
description,
tags,
photoUrl: `${environment.photosUrl}/${photoPath}`
}
});
this.page = (res as any).data.getPhotos.page;
this.totalPhotos = (res as any).data.getPhotos.totalPhotos;
this.store.dispatch({ type: SET_PHOTOS, payload: photoArrayData });
})
}
openEditDialog(index: number) {
const dialogRef = this.dialog.open(EditPhotoDialogComponent, {
width: '70vw',
data: this.photoArrayData[index] || {}
})
dialogRef.afterClosed().subscribe(result => {
console.log('The dialog was closed');
});
}
deletePhoto(index: number) {
const { id } = this.photoArrayData[index];
this.photoService.deletePhoto(id)
.subscribe(res => {
this.getPhotos();
})
}
}
This lets people upload their photos, open a dialog to edit, or delete photos.
It has a file input to take a file object and calls the photoService to make the GraphQL requests that manipulate the photo table entries. In upload-page.component.html, we put:
<div class="center">
<h1>Manage Files</h1>
</div>
<h2>Add Photo</h2>
<form #photoForm='ngForm' (ngSubmit)='save(photoForm)'>
<div>
<input type="file" id="file" (change)="handleFileInput($event.target.files)" #photoUpload>
<button mat-raised-button (click)='clickUpload()' type='button'>
Upload Photo
</button>
{{photoData?.file?.name}}
</div>
<mat-form-field>
<input matInput placeholder="Description" required #description='ngModel' name='description'
  [(ngModel)]='photoData.description'>
<mat-error *ngIf="description.invalid && (description.dirty || description.touched)">
<div *ngIf="description.errors.required">
Description is required.
</div>
</mat-error>
</mat-form-field>
<br>
<mat-form-field>
<input matInput placeholder="Tags" required #tags='ngModel' name='tags' [(ngModel)]='photoData.tags'>
<mat-error *ngIf="tags.invalid && (tags.dirty || tags.touched)">
<div *ngIf="tags.errors.required">
Tags is required.
</div>
</mat-error>
</mat-form-field>
<br>
<button mat-raised-button type='submit'>Save</button>
</form>
<br>
<h2>Manage Photos</h2>
<table mat-table [dataSource]="photoArrayData" class="mat-elevation-z8">
<ng-container matColumnDef="photoUrl">
<th mat-header-cell *matHeaderCellDef> Photo </th>
<td mat-cell *matCellDef="let photo">
<img [src]='photo.photoUrl' class="photo">
</td>
</ng-container>
<ng-container matColumnDef="description">
<th mat-header-cell *matHeaderCellDef> Description </th>
<td mat-cell *matCellDef="let photo"> {{photo.description}} </td>
</ng-container>
<ng-container matColumnDef="tags">
<th mat-header-cell *matHeaderCellDef> Tags </th>
<td mat-cell *matCellDef="let photo"> {{photo.tags}} </td>
</ng-container>
<ng-container matColumnDef="edit">
<th mat-header-cell *matHeaderCellDef> Edit </th>
<td mat-cell *matCellDef="let photo; let i = index">
<button mat-raised-button (click)='openEditDialog(i)'>Edit</button>
</td>
</ng-container>
<ng-container matColumnDef="delete">
<th mat-header-cell *matHeaderCellDef> Delete </th>
<td mat-cell *matCellDef="let photo; let i = index">
<button mat-raised-button (click)='deletePhoto(i)'>Delete</button>
</td>
</ng-container>
<tr mat-header-row *matHeaderRowDef="displayedColumns"></tr>
<tr mat-row *matRowDef="let row; columns: displayedColumns;"></tr>
</table>
<mat-paginator [length]="totalPhotos" [pageSize]="10" [pageSizeOptions]="[10]"
(page)="page = $event.pageIndex + 1; getPhotos()">
</mat-paginator>
This shows a form to upload a photo and enter a description and tags with it, and it also displays a table row of photo data with edit and delete buttons in each row. Users can navigate through 10 photos per page with the paginator component at the bottom.
In upload-page.component.scss, we put:
#file {
display: none;
}
table.mat-table,
.mat-paginator {
width: 92vw;
}
.photo {
width: 50px;
}
This hides the file upload input and changes the width of the table and paginator component to be the same.
Next, in app-routing.module.ts, we put:
import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';
import { HomePageComponent } from './home-page/home-page.component';
import { UploadPageComponent } from './upload-page/upload-page.component';
const routes: Routes = [
{ path: '', component: HomePageComponent },
{ path: 'upload', component: UploadPageComponent },
];
@NgModule({
imports: [RouterModule.forRoot(routes)],
exports: [RouterModule]
})
export class AppRoutingModule { }
This maps each URL to the page component we created.
In app.component.ts, we put:
import { Component, HostListener } from '@angular/core';
import { Store, select } from '@ngrx/store';
import { TOGGLE_MENU } from './reducers/menu-reducer';
@Component({
selector: 'app-root',
templateUrl: './app.component.html',
styleUrls: ['./app.component.scss']
})
export class AppComponent {
menuOpen: boolean;
constructor(
private store: Store<any>,
) {
store.pipe(select('menu'))
.subscribe(menuOpen => {
this.menuOpen = menuOpen;
})
}
@HostListener('document:click', ['$event'])
public onClick(event) {
const isOutside = !event.target.className.includes("menu-button") &&
!event.target.className.includes("material-icons") &&
!event.target.className.includes("mat-drawer-inner-container")
if (isOutside) {
this.menuOpen = false;
this.store.dispatch({ type: TOGGLE_MENU, payload: this.menuOpen });
}
}
}
This keeps the menu state in sync with the store and closes the menu whenever the user clicks outside of it. The component's template contains the menu and the router-outlet element that displays the routes we designated in app-routing.module.ts.
In styles.scss, we put:
/* You can add global styles to this file, and also import other style files */
@import "~@angular/material/prebuilt-themes/indigo-pink.css";
body {
font-family: "Roboto", sans-serif;
margin: 0;
}
form {
mat-form-field {
width: 95%;
margin: 0 auto;
}
}
.center {
text-align: center;
}
This imports the prebuilt Angular Material theme CSS and adds some global styles.
In index.html, we put:
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Frontend</title>
<base href="/">
<link href="https://fonts.googleapis.com/css?family=Roboto&display=swap" rel="stylesheet">
<link href="https://fonts.googleapis.com/icon?family=Material+Icons" rel="stylesheet">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="icon" type="image/x-icon" href="favicon.ico">
</head>
<body>
<app-root></app-root>
</body>
</html>
This changes the title and includes the Roboto font and the Material Icons stylesheet so that our app can display the icons.
Finally, in app.module.ts, we put:
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import {
MatButtonModule,
MatCheckboxModule,
MatInputModule,
MatMenuModule,
MatSidenavModule,
MatToolbarModule,
MatTableModule,
MatDialogModule,
MatDatepickerModule,
MatSelectModule,
MatCardModule,
MatFormFieldModule,
MatGridListModule
} from '@angular/material';
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
import { AppRoutingModule } from './app-routing.module';
import { AppComponent } from './app.component';
import { StoreModule } from '@ngrx/store';
import { reducers } from './reducers';
import { TopBarComponent } from './top-bar/top-bar.component';
import { FormsModule } from '@angular/forms';
import { HttpClientModule, HTTP_INTERCEPTORS } from '@angular/common/http';
import { HomePageComponent } from './home-page/home-page.component';
import { PhotoService } from './photo.service';
import { GraphQLModule } from './graphql.module';
import { UploadPageComponent } from './upload-page/upload-page.component';
import { MatPaginatorModule } from '@angular/material/paginator';
import { EditPhotoDialogComponent } from './edit-photo-dialog/edit-photo-dialog.component';
@NgModule({
declarations: [
AppComponent,
TopBarComponent,
HomePageComponent,
UploadPageComponent,
EditPhotoDialogComponent,
],
imports: [
BrowserModule,
AppRoutingModule,
FormsModule,
MatButtonModule,
StoreModule.forRoot(reducers),
BrowserAnimationsModule,
MatCheckboxModule,
MatFormFieldModule,
MatInputModule,
MatMenuModule,
MatSidenavModule,
MatToolbarModule,
MatTableModule,
HttpClientModule,
MatDialogModule,
MatDatepickerModule,
MatSelectModule,
MatCardModule,
MatGridListModule,
GraphQLModule,
MatPaginatorModule
],
providers: [
PhotoService
],
bootstrap: [AppComponent],
entryComponents: [
EditPhotoDialogComponent
]
})
export class AppModule { }
This includes everything we need to run the app module. Note that we have EditPhotoDialogComponent in entryComponents. This is required for components that are created dynamically, such as our dialog box.
After all that work, we have a working image gallery app.
#nodejs #GraphQL #javascript
1595240610
Laravel 7 file/image upload via API using Postman: an example tutorial. Here, you will learn how to upload files and images via API in a Laravel app, using either Postman or Ajax.
You will also learn how to validate files and images before they are uploaded to the server via the API.
Follow the steps below to upload files via API using Postman, with validation, in your Laravel app:
Checkout Full Article here https://www.tutsmake.com/laravel-file-upload-via-api-example-from-scratch/
#uploading files via laravel api #laravel file upload api using postman #laravel image upload via api #upload image using laravel api #image upload api in laravel validation #laravel send file to api
1597559012
In this post, I will show you easy steps for multiple file upload in Laravel 7 and 6, as well as how to validate file type and size before uploading to the database.
You can easily upload multiple files with validation in a Laravel application using the following steps:
https://www.tutsmake.com/laravel-6-multiple-file-upload-with-validation-example/
#laravel multiple file upload validation #multiple file upload in laravel 7 #multiple file upload in laravel 6 #upload multiple files laravel 7 #upload multiple files in laravel 6 #upload multiple files php laravel
1595396220
As more and more data is exposed via APIs, whether by API-first companies or to power the explosion of single-page apps and the JAMstack, API security can no longer be an afterthought. The hard part about APIs is that they provide direct access to large amounts of data while bypassing browser precautions. Instead of worrying only about SQL injection and XSS issues, you should be concerned about the bad actor who is able to paginate through all your customer records and their data.
Typical prevention mechanisms like Captchas and browser fingerprinting won’t work since APIs by design need to handle a very large number of API accesses even by a single customer. So where do you start? The first thing is to put yourself in the shoes of a hacker and then instrument your APIs to detect and block common attacks along with unknown unknowns for zero-day exploits. Some of these are on the OWASP Security API list, but not all.
Most APIs provide access to resources that are lists of entities such as /users or /widgets. A client such as a browser would typically filter and paginate through this list to limit the number of items returned, like so:
First Call: GET /items?skip=0&take=10
Second Call: GET /items?skip=10&take=10
However, if that entity has any PII or other sensitive information, then a hacker could scrape that endpoint to get a dump of all entities in your database. This is most dangerous when those entities accidentally expose PII, but it could also be dangerous in providing competitors with adoption and usage stats for your business, or in providing scammers with a way to get large email lists. See how Venmo data was scraped.
A naive protection mechanism would be to check the take count and throw an error if it is greater than 100 or 1,000. The problem is that this does nothing to stop an attacker who simply requests small pages in a loop, as in this script:
import sys
from random import randint
from time import sleep

import requests

skip = 0
while True:
    response = requests.post('https://api.acmeinc.com/widgets?take=10&skip=' + str(skip),
                             headers={'Authorization': 'Bearer ' + sys.argv[1]})
    print("Fetched 10 items")
    sleep(randint(100, 1000))
    skip += 10
To secure against pagination attacks, you should track how many items of a single resource are accessed within a certain time period for each user or API key, rather than just at the request level. By tracking API resource access at the user level, you can block a user or API key once they hit a threshold such as "touched 1,000,000 items in a one-hour period". The right threshold depends on your API use case and can even depend on the customer's subscription tier. Like a Captcha, this slows down the rate at which a hacker can exploit your API, since they would have to manually create a new user account to obtain a new API key.
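The idea above can be sketched in a few lines. This is a minimal, illustrative in-memory version (the function and variable names are our own, not from any particular library); a production system would keep these counters in a shared store such as Redis so that every API server sees the same totals:

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 3600            # track access over a one-hour window
MAX_ITEMS_PER_WINDOW = 1_000_000  # hypothetical per-user threshold

# user_id -> list of (timestamp, item_count) entries within the window
_access_log = defaultdict(list)

def record_access(user_id, item_count, now=None):
    """Record that a user touched item_count items.

    Returns True if the user is still under the threshold,
    False if this access should be blocked.
    """
    now = now if now is not None else time.time()
    log = _access_log[user_id]
    # Drop entries that fell outside the sliding window.
    log[:] = [(t, n) for (t, n) in log if now - t < WINDOW_SECONDS]
    log.append((now, item_count))
    total = sum(n for _, n in log)
    return total <= MAX_ITEMS_PER_WINDOW
```

Each paginated response would call record_access with the number of items it returned; once a user crosses the threshold, further requests are rejected regardless of which of their API keys made the call.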
Most APIs are protected by some sort of API key or JWT (JSON Web Token). This provides a natural way to track and protect your API as API security tools can detect abnormal API behavior and block access to an API key automatically. However, hackers will want to outsmart these mechanisms by generating and using a large pool of API keys from a large number of users just like a web hacker would use a large pool of IP addresses to circumvent DDoS protection.
The easiest way to secure against these types of attacks is by requiring a human to sign up for your service and generate API keys. Bot traffic can be prevented with things like Captcha and 2-Factor Authentication. Unless there is a legitimate business case, new users who sign up for your service should not have the ability to generate API keys programmatically. Instead, only trusted customers should have the ability to generate API keys programmatically. Go one step further and ensure any anomaly detection for abnormal behavior is done at the user and account level, not just for each API key.
APIs are often used in ways that increase the probability that credentials are leaked:
If a key is exposed due to user error, one may think that you, as the API provider, bear no blame. However, security is all about reducing surface area and risk. Treat your customer data as if it's your own and help them by adding guards that prevent accidental key exposure.
The easiest way to prevent key exposure is by leveraging two tokens rather than one. A refresh token is stored as an environment variable and can only be used to generate short-lived access tokens. Unlike the refresh token, these short-lived tokens can access the resources, but they are time-limited, for example to hours or days.
The customer stores the refresh token alongside their other API keys. Your SDK then generates access tokens on SDK init or when the last access token expires. If a cURL command gets pasted into a GitHub issue, a hacker would need to use it within hours, which shrinks the attack window (unless it was the actual refresh token, which is less likely).
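As a rough sketch of the exchange, here is one possible HMAC-based implementation (the secret, TTL, and function names are all assumptions for illustration; real systems typically use signed JWTs for the access token):

```python
import hashlib
import hmac
import time

# Hypothetical server-side signing secret; in practice this comes from a vault.
SERVER_SECRET = b"server-signing-secret"
ACCESS_TOKEN_TTL = 3600  # access tokens live for one hour

def issue_access_token(refresh_token, now=None):
    """Exchange a long-lived refresh token for a short-lived access token."""
    now = int(now if now is not None else time.time())
    expires = now + ACCESS_TOKEN_TTL
    payload = f"{refresh_token}:{expires}"
    sig = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{expires}.{sig}"

def is_access_token_valid(refresh_token, token, now=None):
    """Check the token's signature and that it has not expired."""
    now = int(now if now is not None else time.time())
    try:
        expires_str, sig = token.split(".")
        expires = int(expires_str)
    except ValueError:
        return False
    payload = f"{refresh_token}:{expires}"
    expected = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    # compare_digest avoids timing attacks on the signature comparison.
    return hmac.compare_digest(sig, expected) and now < expires
```

A leaked access token becomes useless once its embedded expiry passes, while the refresh token never travels in ordinary API requests.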
APIs open up entirely new business models where customers can access your API platform programmatically. However, this can make DDoS protection tricky. Most DDoS protection is designed to absorb and reject a large number of requests from bad actors during DDoS attacks but still need to let the good ones through. This requires fingerprinting the HTTP requests to check against what looks like bot traffic. This is much harder for API products as all traffic looks like bot traffic and is not coming from a browser where things like cookies are present.
The magical part about APIs is that almost every access requires an API key. If a request doesn't have an API key, you can automatically reject it, which is lightweight on your servers (ensure authentication is short-circuited very early, before later middleware like request JSON parsing). So then how do you handle authenticated requests? The easiest way is to keep rate-limit counters for each API key, such as allowing X requests per minute, and to reject requests above the threshold with a 429 HTTP response.
There are a variety of algorithms to do this such as leaky bucket and fixed window counters.
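Of these, the fixed window counter is the simplest to reason about: keep one counter per key per time window and reset it when a new window starts. A minimal sketch (the limit and names are illustrative, and a real deployment would keep the counters in a shared store):

```python
import time
from collections import defaultdict

REQUESTS_PER_MINUTE = 60  # hypothetical per-key limit

# api_key -> [window_start_minute, request_count]
_windows = defaultdict(lambda: [0, 0])

def allow_request(api_key, now=None):
    """Fixed-window rate limiter: allow up to REQUESTS_PER_MINUTE per key.

    Returns False when the request should be rejected with HTTP 429.
    """
    now = now if now is not None else time.time()
    minute = int(now // 60)
    window = _windows[api_key]
    if window[0] != minute:        # a new minute began: reset the counter
        window[0], window[1] = minute, 0
    window[1] += 1
    return window[1] <= REQUESTS_PER_MINUTE
```

The known weakness of fixed windows is the boundary burst (up to 2x the limit across a window edge), which is why leaky bucket or sliding-window variants are often preferred.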
APIs are no different than web servers when it comes to good server hygiene. Data can be leaked due to a misconfigured SSL certificate or by allowing non-HTTPS traffic. For modern applications, there is very little reason to accept non-HTTPS requests, but a customer could mistakenly issue a non-HTTPS request from their application or cURL, exposing the API key. APIs do not have the protection of a browser, so things like HSTS or redirecting to HTTPS offer no protection.
Test your SSL implementation with the Qualys SSL Test or a similar tool. You should also block all non-HTTPS requests, which can be done within your load balancer, and remove any HTTP headers or error messages that leak implementation details. If your API is used only by your own apps or can only be accessed server-side, then review an authoritative guide to Cross-Origin Resource Sharing for REST APIs.
APIs provide access to dynamic data that’s scoped to each API key. Any caching implementation should have the ability to scope to an API key to prevent cross-pollution. Even if you don’t cache anything in your infrastructure, you could expose your customers to security holes. If a customer with a proxy server was using multiple API keys such as one for development and one for production, then they could see cross-pollinated data.
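One simple way to guarantee this scoping is to include a hash of the API key in every cache key, so a response cached for one key can never be served for another. A minimal sketch (the function name is our own):

```python
import hashlib

def scoped_cache_key(api_key, path, query=""):
    """Build a cache key that is unique per API key, path, and query string.

    Hashing the key avoids storing raw credentials in cache-key metadata.
    """
    key_hash = hashlib.sha256(api_key.encode()).hexdigest()[:16]
    return f"{key_hash}:{path}?{query}"
```

With this scheme, the development and production keys from the proxy-server example above would map to distinct cache entries even for identical requests.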
#api management #api security #api best practices #api providers #security analytics #api management policies #api access tokens #api access #api security risks #api access keys
1601381326
We’ve conducted some initial research into the public APIs of the ASX100 because we regularly have conversations about what others are doing with their APIs and what best practices look like. Being able to point to good local examples and explain what is happening in Australia is a key part of this conversation.
The method used for this initial research was to obtain a list of the ASX100 (as of 18 September 2020) and then work through each company, looking at the following:
With regards to how the APIs are shared:
#api #api-development #api-analytics #apis #api-integration #api-testing #api-security #api-gateway
1604399880
I’ve been working with Restful APIs for some time now and one thing that I love to do is to talk about APIs.
So, today I will show you how to build an API using the API-First approach and Design First with OpenAPI Specification.
First things first: if you don't know what an API-First approach means, it would be worth pausing here to check the blog post I wrote for the Farfetch blog, where I explain everything that you need to know to start an API using API-First.
Before you get your hands dirty, let’s prepare the ground and understand the use case that will be developed.
If you want to reproduce the examples shown here, you will need some of the items below.
To keep things easy to understand, let's use a Todo List app; it is a concept familiar even beyond the software development community.
#api #rest-api #openai #api-first-development #api-design #apis #restful-apis #restful-api