1660418880
Note: We are working on a serverless version of WunderGraph. If you're interested, please see here.
WunderGraph is the Serverless API Developer Platform with a focus on Developer Experience.
At its core, WunderGraph combines the API Gateway pattern with the Backend for Frontend (BFF) pattern to create the perfect Developer Experience for working with APIs.
Take all your (micro-)services, Databases, File Storages, Identity Providers as well as 3rd party APIs and combine them into your own Firebase-like Developer Toolkit, without getting locked into a specific vendor.
Imagine that each of your applications could have its own dedicated BFF, while being able to share common logic across all your applications. That's the WunderGraph Experience.
The fastest way to get started with WunderGraph is to open a Gitpod. After bootstrapping, the examples/simple project is started.
You can also follow the Quickstart (5 min) if you don't want to use Gitpod.
WunderGraph is made up of three core components:
The auto-generated type-safe client can be used in any Node.js or TypeScript backend application (including serverless applications and microservices).
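As a rough, hypothetical sketch (the import path, method name and the "Users" operation below are assumptions, not the actual generated API), calling such a client from a Node.js script could look like this:
// Hypothetical sketch: the real client is generated into your project by wunderctl,
// so the module path, method names and operation names below are assumptions.
import { createClient } from "./.wundergraph/generated/client";

async function main() {
  const client = createClient();
  // "Users" stands for a Users.graphql operation you would have written yourself.
  const { data, error } = await client.query({ operationName: "Users" });
  if (error) {
    throw error;
  }
  console.log(data);
}

main().catch(console.error);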
Note: WunderHub is our vision of the Package Manager for APIs. Like npm, but for APIs. Sign up for free!
WunderGraph is unique in its design, as we're not directly exposing GraphQL, but JSON-RPC. Combined with a generated Type-Safe client, this leads to a unique Developer Experience. You can learn more about the architecture of WunderGraph and why we've built it this way in the architecture section.
If you'd like to get a quick overview, have a look at these annotated example snippets.
This section provides a high-level overview of how WunderGraph works and its most consumer centric components. For a more thorough introduction, visit the architecture documentation.
After initializing your first WunderGraph application with npx @wundergraph/wunderctl init, you have an NPM package and a .wundergraph folder. This folder contains the following files:
wundergraph.config.ts - The primary config file for your WunderGraph application. Add data-sources and more.
wundergraph.operations.ts - Configure authentication, caching and more for a specific operation or for all operations.
wundergraph.server.ts - The hooks server to hook into different lifecycle events of your gateway.
After configuring your data-sources, you can start writing operations. An operation is just a *.graphql file. The name of the file will be the operation name. You can write queries, mutations and subscriptions that span multiple data-sources. Each operation will be exposed securely as a JSON API over HTTP through the WunderGraph gateway. After writing your operations, you can start deploying your WunderGraph application.
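For illustration, a minimal operation file might look like the following sketch (the file name and the users field are hypothetical; the fields you can query depend entirely on the data-sources you configured):
# .wundergraph/operations/Users.graphql (the file name "Users" becomes the operation name)
query Users {
  users {
    id
    name
  }
}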
Read the CONTRIBUTING.md to learn how to contribute to WunderGraph.
We are thankful for any and all security reports. Please read the SECURITY.md to learn how to report any security concerns to WunderGraph.
We're a small but growing team of API Enthusiasts, thrilled to help you get the best Developer Experience of working with APIs. Our Support Plans are tailored to help your teams get the most out of WunderGraph. We love building close relationships with our customers, allowing us to continuously improve the product and iterate fast. Our sales team is available to talk with you about your project needs, pricing information, support plans, and custom-built features.
Use this Link to contact our sales team for a demo.
Author: wundergraph
Source code: https://github.com/wundergraph/wundergraph
License: Apache-2.0 license
#react-native #typescript #javascript #graphql
1660383480
GraphQL API server for Fantom Artion v2 - backend for Artion-Client-V2.
Build using make:
make
Create a JSON config file following the doc/config.example.json example.
Requirements to run:
- node.url in the config file: address of the blockchain node RPC interface.
- db section of the config file: the MongoDB database used by the API server.
- shared_db section: the shared MongoDB database.
- ipfs.url: the IPFS node to use.
- ipfs.gateway and ipfs.gateway_bearer: the IPFS gateway (needed even when a local IPFS node is used otherwise!).
- notification.sendgrid section: SendGrid settings for notifications.
Before the first start you need to initialize the MongoDB database. If you want to use contracts other than the official ones on mainnet, you will need to update observed.json appropriately first.
mongoimport --db=artion --collection=observed --file=doc/db/observed.json
mongoimport --db=artion --collection=status --file=doc/db/status.json
For the shared MongoDB database:
mongoimport --db=artionshared --collection=colcats --file=doc/db/colcats.json
mongoimport --db=artionshared --collection=collections --file=doc/db/collections.json
When configured, run the Artion api server:
build/artionapi -cfg my-config-file.json
For production deployment check systemd example in doc/systemd
to install the api server as systemd service.
As soon as the api server is started, you can access GraphiQL testing interface at http://localhost:7373/graphi.
To connect Artion-Client-V2 update the providers list in app.config.js
to use http://localhost:7373/graphql
.
Author: Fantom-foundation
Source code: https://github.com/Fantom-foundation/Artion-API-GraphQL
License: GPL-3.0 license
#fantom #blockchain #api #graphql #go #golang
1660302860
Full Stack boilerplate with JWT authentication.
Built with React, Typescript, Node, Express, GraphQL, PostgreSQL, Redis, and Webpack.
Uses custom hooks and code splitting optimization via route-based component lazy loading with the Suspense component.
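A minimal sketch of that pattern, assuming react-router-dom for routing (the page components and routes here are hypothetical, not taken from this repo):
import React, { Suspense, lazy } from 'react';
import { BrowserRouter, Routes, Route } from 'react-router-dom';

// Each page is split into its own bundle and only loaded when its route is visited.
const Home = lazy(() => import('./pages/Home'));
const Dashboard = lazy(() => import('./pages/Dashboard'));

export function App() {
  return (
    <BrowserRouter>
      <Suspense fallback={<p>Loading...</p>}>
        <Routes>
          <Route path="/" element={<Home />} />
          <Route path="/dashboard" element={<Dashboard />} />
        </Routes>
      </Suspense>
    </BrowserRouter>
  );
}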
Unexpired tokens on sign-out are stored in a redis list and checked against on all authentication attempts.
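A minimal sketch of that check, assuming ioredis and jsonwebtoken (the Redis key and helper names are hypothetical, not taken from this repo):
import Redis from 'ioredis';
import jwt from 'jsonwebtoken';

const redis = new Redis(process.env.REDIS_URL ?? 'redis://localhost:6379');
const DENYLIST_KEY = 'token:denylist'; // hypothetical key name

// On sign-out: remember the still-unexpired token.
export async function denylistToken(token: string): Promise<void> {
  await redis.lpush(DENYLIST_KEY, token);
}

// On every authentication attempt: reject tokens that were signed out, then verify as usual.
export async function verifyToken(token: string): Promise<string | jwt.JwtPayload> {
  const position = await redis.lpos(DENYLIST_KEY, token);
  if (position !== null) {
    throw new Error('Token has been signed out');
  }
  return jwt.verify(token, process.env.JWT_SECRET as string);
}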
Clone the repo:
git clone https://github.com/scottjason/ts-boilerplate-graphql-postgres.git
Then cd into the root directory and run npm install
Add a .env
file in the root directory of the repo with the following, and update the values:
JWT_SECRET=enter your JWT secret, a long random string
DEV_ORIGIN=http://localhost:8080
PROD_ORIGIN=Enter your production origin
REDIS_URL=Enter your redis url ie redis://...
REDIS_TLS_URL=Enter your redis tls url ie rediss://...
DEV_DB_HOST=localhost
DEV_DB_USER=yourname
DEV_DB_PASSWORD=yourpassword
DEV_DB_NAME=testdb
DEV_DB_DIALECT=postgres
DEV_DB_MAX=5
DEV_DB_MIN=0
DEV_DB_ACQUIRE=30000
Then run npm run dev
to start development and your browser should open to http://localhost:8080
.
To build the production bundle, run npm run build
Deployed to Heroku, preview app.
MIT License
Copyright (c) 2022 Scott Jason
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Author: scottjason
Source code: https://github.com/scottjason/ts-boilerplate-graphql-postgres
License: MIT license
#react-native #typescript #javascript #postgresql #redis #node #graphql
1660231440
High performance API server for Fantom powered blockchain network.
Please check the release tags to get more details and to download previous releases.
This version connects with the Lachesis v.0.7.0-rc1. The SFC contract ABI bundled with the API is version 2.0.2-rc1.
The release brings the new fluid delegations and rewards system. Each address is able to delegate to multiple stakers. A delegation can be locked for a certain time, at least 14 days and up to 1 year, to get higher rewards. Please check our website Fantom.Foundation and the Special Fee Contract repository for more details.
This is the version you want to be able to connect with Lachesis v.0.6.0-rc2. The SFC contract ABI bundled with this API release is the version 1.1.0-rc1. The release uses Lachesis API v0.6.0 which recognizes single delegation per address and no delegation locking.
Building apiserver
requires Go (version 1.13 or later). You can install it using your favourite package manager. Once the dependencies are installed, run
go build -o ./build/apiserver ./cmd/apiserver
The build output is build/apiserver
executable.
You don't need to clone the project into $GOPATH; thanks to Go Modules, you can use any location.
To run the API Server you need access to an RPC interface of a full Lachesis node. Please follow the Lachesis instructions to build and run the node. Alternatively, you can obtain access to a remotely running instance of Lachesis.
We recommend using a local IPC channel for communication between a Lachesis node and the API Server for performance and security reasons. Please consider the security implications of opening Lachesis RPC to outside access, especially if you enable "personal" commands on your node while keeping your account keys in the Lachesis key store.
Persistent data are stored in a MongoDB database. Going through the installation and configuration process of MongoDB is out of scope here; please consult the MongoDB manual to install and configure an appropriate MongoDB environment for your deployment of the API server.
Author: Fantom-foundation
Source code: https://github.com/Fantom-foundation/opera-jet-api-graphql
License: MIT license
#fantom #blockchain #api #graphql
1660188780
Building a decentralized video sharing app with Arweave, Bundlr, GraphQL, and Next.js.
1. Node.js installed (I recommend using either NVM or FNM for Node.js installation)
2. Matic, Arbitrum, or Avalanche tokens
3. Metamask installed as a browser extension
4. Fund your Bundlr wallet here with around $1.00 of your preferred currency.
To get started, create a new Next.js application
npx create-next-app arweave-app
Next, change into the directory and install the dependencies using either NPM, Yarn, PNPM, or your favorite package manager:
cd arweave-app
npm install @bundlr-network/client arweave @emotion/css ethers react-select
@emotion/css - CSS in JavaScript library for styling
react-select - select input control library for React
@bundlr-network/client - JavaScript client for interacting with Bundlr network
arweave - The Arweave JavaScript library
Now that the dependencies are installed, create a new file named context.js in the root directory. We will use this file to initialize some React context that we'll be using to provide global state between routes.
// context.js
import { createContext } from 'react'
export const MainContext = createContext()
Next, let's create a new page in the pages directory called _app.js.
Here, we want to get started by enabling the user to sign in to bundlr using their MetaMask wallet.
We'll pass this functionality and some state into other pages so that we can use it there.
Add the following code to pages/_app.js:
// pages/_app.js
import '../styles/globals.css'
import { WebBundlr } from "@bundlr-network/client"
import { MainContext } from '../context'
import { useState, useRef } from 'react'
import { providers, utils } from 'ethers'
import { css } from '@emotion/css'
import Link from 'next/link'
function MyApp({ Component, pageProps }) {
const [bundlrInstance, setBundlrInstance] = useState()
const [balance, setBalance] = useState(0)
// set the base currency as matic (this can be changed later in the app)
const [currency, setCurrency] = useState('matic')
const bundlrRef = useRef()
// create a function to connect to bundlr network
async function initialiseBundlr() {
await window.ethereum.enable()
const provider = new providers.Web3Provider(window.ethereum);
await provider._ready()
const bundlr = new WebBundlr("https://node1.bundlr.network", currency, provider)
await bundlr.ready()
setBundlrInstance(bundlr)
bundlrRef.current = bundlr
fetchBalance()
}
// get the user's bundlr balance
async function fetchBalance() {
const bal = await bundlrRef.current.getLoadedBalance()
console.log('bal: ', utils.formatEther(bal.toString()))
setBalance(utils.formatEther(bal.toString()))
}
return (
<div>
<nav className={navStyle}>
<Link href="/">
<a>
<div className={homeLinkStyle}>
<p className={homeLinkTextStyle}>
ARWEAVE VIDEO
</p>
</div>
</a>
</Link>
</nav>
<div className={containerStyle}>
<MainContext.Provider value={{
initialiseBundlr,
bundlrInstance,
balance,
fetchBalance,
currency,
setCurrency
}}>
<Component {...pageProps} />
</MainContext.Provider>
</div>
<footer className={footerStyle}>
<Link href="/profile">
<a>
ADMIN
</a>
</Link>
</footer>
</div>
)
}
const navHeight = 80
const footerHeight = 70
const navStyle = css`
height: ${navHeight}px;
padding: 40px 100px;
border-bottom: 1px solid #ededed;
display: flex;
align-items: center;
`
const homeLinkStyle = css`
display: flex;
flex-direction: row;
align-items: center;
`
const homeLinkTextStyle = css`
font-weight: 200;
font-size: 28;
letter-spacing: 7px;
`
const footerStyle = css`
border-top: 1px solid #ededed;
height: ${footerHeight}px;
padding: 0px 40px;
display: flex;
align-items: center;
justify-content: center;
font-weight: 200;
letter-spacing: 1px;
font-size: 14px;
`
const containerStyle = css`
min-height: calc(100vh - ${navHeight + footerHeight}px);
width: 900px;
margin: 0 auto;
padding: 40px;
`
export default MyApp
What have we done here?
- Created an initialiseBundlr function that connects to the Bundlr network.
- Created a fetchBalance function that reads the user's Bundlr balance.
- Styled the layout with emotion.
- Added a footer link to a profile page that has not yet been created.
Next, let's run the app:
npm run dev
You should see the app load and have a header and a footer!
Next, let's create the UI that will allow the user to choose the currency they'd like to use and connect to Bundlr.
To do so, create a new file in the pages directory named profile.js. Here, add the following code:
import { useState, useContext } from 'react'
import { MainContext } from '../context'
import { css } from '@emotion/css'
import Select from 'react-select'
// list of supported currencies: https://docs.bundlr.network/docs/currencies
const supportedCurrencies = {
matic: 'matic',
ethereum: 'ethereum',
avalanche: 'avalanche',
bnb: 'bnb',
arbitrum: 'arbitrum'
}
const currencyOptions = Object.keys(supportedCurrencies).map(v => {
return {
value: v, label: v
}
})
export default function Profile() {
// use context to get data and functions passed from _app.js
const { balance, bundlrInstance, initialiseBundlr, currency, setCurrency } = useContext(MainContext)
// if the user has not initialized bundlr, allow them to
if (!bundlrInstance) {
return (
<div>
<div className={selectContainerStyle} >
<Select
onChange={({ value }) => setCurrency(value)}
options={currencyOptions}
defaultValue={{ value: currency, label: currency }}
classNamePrefix="select"
instanceId="currency"
/>
<p>Currency: {currency}</p>
</div>
<div className={containerStyle}>
<button className={wideButtonStyle} onClick={initialiseBundlr}>Connect Wallet</button>
</div>
</div>
)
}
// once the user has initialized Bundlr, show them their balance
return (
<div>
<h3 className={balanceStyle}>Balance {Math.round(balance * 100) / 100}</h3>
</div>
)
}
const selectContainerStyle = css`
margin: 10px 0px 20px;
`
const containerStyle = css`
padding: 10px 20px;
display: flex;
justify-content: center;
`
const buttonStyle = css`
background-color: black;
color: white;
padding: 12px 40px;
border-radius: 50px;
font-weight: 700;
width: 180;
transition: all .35s;
cursor: pointer;
&:hover {
background-color: rgba(0, 0, 0, .75);
}
`
const wideButtonStyle = css`
${buttonStyle};
width: 380px;
`
const balanceStyle = css`
padding: 10px 25px;
background-color: rgba(0, 0, 0, .08);
border-radius: 30px;
display: inline-block;
width: 200px;
text-align: center;
`
In this file we've used useContext to get the functions and state variables defined in pages/_app.js, rendered a currency selector, and added a Connect Wallet button.
Next let's test it out:
npm run dev
You should see a dropdown menu and be able to connect to Bundlr with your wallet!
Next, let's add the code that will allow users to upload and save a video to Arweave with Bundlr.
Create a new file named utils.js
in the root directory and add the following code:
import Arweave from 'arweave'
export const arweave = Arweave.init({})
export const APP_NAME = 'SOME_UNIQUE_APP_NAME'
Next, update pages/profile.js with the following code (new code is commented):
import { useState, useContext } from 'react'
import { MainContext } from '../context'
import { css } from '@emotion/css'
import Select from 'react-select'
// New imports
import { APP_NAME } from '../utils'
import { useRouter } from 'next/router'
import { utils } from 'ethers'
const supportedCurrencies = {
matic: 'matic',
ethereum: 'ethereum',
avalanche: 'avalanche',
bnb: 'bnb',
arbitrum: 'arbitrum'
}
const currencyOptions = Object.keys(supportedCurrencies).map(v => {
return {
value: v, label: v
}
})
export default function Profile() {
const { balance, bundlrInstance, initialiseBundlr, currency, setCurrency } = useContext(MainContext)
// New local state variables
const [file, setFile] = useState()
const [localVideo, setLocalVideo] = useState()
const [title, setTitle] = useState('')
const [description, setDescription] = useState('')
const [fileCost, setFileCost] = useState()
const [URI, setURI] = useState()
// router will allow us to programatically route after file upload
const router = useRouter()
// when the file is uploaded, save to local state and calculate cost
function onFileChange(e) {
const file = e.target.files[0]
if (!file) return
checkUploadCost(file.size)
if (file) {
const video = URL.createObjectURL(file)
setLocalVideo(video)
let reader = new FileReader()
reader.onload = function (e) {
if (reader.result) {
setFile(Buffer.from(reader.result))
}
}
reader.readAsArrayBuffer(file)
}
}
// save the video to Arweave
async function uploadFile() {
if (!file) return
const tags = [{ name: 'Content-Type', value: 'video/mp4' }]
try {
let tx = await bundlrInstance.uploader.upload(file, tags)
setURI(`http://arweave.net/${tx.data.id}`)
} catch (err) {
console.log('Error uploading video: ', err)
}
}
async function checkUploadCost(bytes) {
if (bytes) {
const cost = await bundlrInstance.getPrice(bytes)
setFileCost(utils.formatEther(cost.toString()))
}
}
// save the video and metadata to Arweave
async function saveVideo() {
if (!file || !title || !description) return
const tags = [
{ name: 'Content-Type', value: 'text/plain' },
{ name: 'App-Name', value: APP_NAME }
]
const video = {
title,
description,
URI,
createdAt: new Date(),
createdBy: bundlrInstance.address,
}
try {
let tx = await bundlrInstance.createTransaction(JSON.stringify(video), { tags })
await tx.sign()
const { data } = await tx.upload()
console.log(`http://arweave.net/${data.id}`)
setTimeout(() => {
router.push('/')
}, 2000)
} catch (err) {
console.log('error uploading video with metadata: ', err)
}
}
if (!bundlrInstance) {
return (
<div>
<div className={selectContainerStyle} >
<Select
onChange={({ value }) => setCurrency(value)}
options={currencyOptions}
defaultValue={{ value: currency, label: currency }}
classNamePrefix="select"
instanceId="currency"
/>
<p>Currency: {currency}</p>
</div>
<div className={containerStyle}>
<button className={wideButtonStyle} onClick={initialiseBundlr}>Connect Wallet</button>
</div>
</div>
)
}
{/* most of this UI is also new */}
return (
<div>
<h3 className={balanceStyle}>Balance {Math.round(balance * 100) / 100}</h3>
<div className={formStyle}>
<p className={labelStyle}>Add Video</p>
<div className={inputContainerStyle}>
<input
type="file"
onChange={onFileChange}
/>
</div>
{ /* if there is a video save to local state, display it */}
{
localVideo && (
<video key={localVideo} width="520" controls className={videoStyle}>
<source src={localVideo} type="video/mp4"/>
</video>
)
}
{/* display calculated upload cost */}
{
fileCost && <h4>Cost to upload: {Math.round((fileCost) * 1000) / 1000} MATIC</h4>
}
<button className={buttonStyle} onClick={uploadFile}>Upload Video</button>
{/* if there is a URI, then show the form to upload it */}
{
URI && (
<div>
<p className={linkStyle} >
<a target="_blank" rel="noopener noreferrer" href={URI}>{URI}</a>
</p>
<div className={formStyle}>
<p className={labelStyle}>Title</p>
<input className={inputStyle} onChange={e => setTitle(e.target.value)} placeholder='Video title' />
<p className={labelStyle}>Description</p>
<textarea placeholder='Video description' onChange={e => setDescription(e.target.value)} className={textAreaStyle} />
<button className={saveVideoButtonStyle} onClick={saveVideo}>Save Video</button>
</div>
</div>
)
}
</div>
</div>
)
}
const selectContainerStyle = css`
margin: 10px 0px 20px;
`
const containerStyle = css`
padding: 10px 20px;
display: flex;
justify-content: center;
`
const buttonStyle = css`
background-color: black;
color: white;
padding: 12px 40px;
border-radius: 50px;
font-weight: 700;
width: 180;
transition: all .35s;
cursor: pointer;
&:hover {
background-color: rgba(0, 0, 0, .75);
}
`
const wideButtonStyle = css`
${buttonStyle};
width: 380px;
`
const balanceStyle = css`
padding: 10px 25px;
background-color: rgba(0, 0, 0, .08);
border-radius: 30px;
display: inline-block;
width: 200px;
text-align: center;
`
// New Styles
const linkStyle = css`
margin: 15px 0px;
`
const inputContainerStyle = css`
margin: 0px 0px 15px;
`
const videoStyle = css`
margin-bottom: 20px;
`
const formStyle = css`
display: flex;
flex-direction: column;
align-items: flex-start;
padding: 20px 0px 0px;
`
const labelStyle = css`
margin: 0px 0px 5px;
`
const inputStyle = css`
padding: 12px 20px;
border-radius: 5px;
border: none;
outline: none;
background-color: rgba(0, 0, 0, .08);
margin-bottom: 15px;
`
const textAreaStyle = css`
${inputStyle};
width: 350px;
height: 90px;
`
const saveVideoButtonStyle = css`
${buttonStyle};
margin-top: 15px;
`
In this file we've done quite a bit! Among other things, we read the selected file, calculate the upload cost with the getPrice API from Bundlr, upload the video file, and save its metadata to Arweave.
Next, let's try it out!
npm run dev
You should now be able to successfully upload a video to the permaweb!
Now that we've uploaded a video, how can we view it?
We'll be using GraphQL to query for the video data from Arweave. Since we passed in a tag for APP_NAME
, we can use that tag to retrieve only the videos for our app.
Let's define the GraphQL query that we'll be using in utils.js:
export const query = { query: `{
transactions(
first: 50,
tags: [
{
name: "App-Name",
values: ["${APP_NAME}"]
},
{
name: "Content-Type",
values: ["text/plain"]
}
]
) {
edges {
node {
id
owner {
address
}
data {
size
}
block {
height
timestamp
}
tags {
name,
value
}
}
}
}
}`
}
We'll also need a function to fetch the video metadata itself from Arweave for each item returned from the GraphQL query. Add the following function to utils.js
:
export const createVideoMeta = async (node) => {
const ownerAddress = node.owner.address;
const height = node.block ? node.block.height : -1;
const timestamp = node.block ? parseInt(node.block.timestamp, 10) * 1000 : -1;
const postInfo = {
txid: node.id,
owner: ownerAddress,
height: height,
length: node.data.size,
timestamp: timestamp,
}
postInfo.request = await arweave.api.get(`/${node.id}`, { timeout: 10000 })
return postInfo;
}
Next, update pages/index.js with the following code:
import { query, arweave, createVideoMeta } from '../utils'
import { useEffect, useState } from 'react'
import { css } from '@emotion/css'
// basic exponential backoff in case of gateway timeout / error
const wait = (ms) => new Promise((res) => setTimeout(res, ms))
export default function Home() {
const [videos, setVideos] = useState([])
// when app loads, fetch videos
useEffect(() => {
getVideos()
}, [])
// fetch data from Arweave
// map over data and fetch metadata for each video then save to local state
async function getVideos(topicFilter = null, depth = 0) {
try {
const results = await arweave.api.post('/graphql', query)
.catch(err => {
console.error('GraphQL query failed')
throw new Error(err);
});
const edges = results.data.data.transactions.edges
const videos = await Promise.all(
edges.map(async edge => await createVideoMeta(edge.node))
)
let sorted = videos.sort((a, b) => new Date(b.request.data.createdAt) - new Date(a.request.data.createdAt))
sorted = sorted.map(s => s.request.data)
setVideos(sorted)
} catch (err) {
await wait(2 ** depth * 10)
getVideos(topicFilter, depth + 1)
console.log('error: ', err)
}
}
return (
<div className={containerStyle}>
{/* map over videos and display them in the UI */}
{
videos.map(video => (
<div className={videoContainerStyle} key={video.URI}>
<video key={video.URI} width="720px" height="405" controls className={videoStyle}>
<source src={video.URI} type="video/mp4"/>
</video>
<div className={titleContainerStyle}>
<h3 className={titleStyle}>{video.title}</h3>
</div>
<p className={descriptionStyle}>{video.description}</p>
</div>
))
}
</div>
)
}
const videoStyle = css`
background-color: rgba(0, 0, 0, .05);
box-shadow: rgba(0, 0, 0, 0.15) 0px 5px 15px 0px;
`
const containerStyle = css`
width: 720px;
margin: 0 auto;
padding: 40px 20px;
display: flex;
align-items: center;
flex-direction: column;
`
const titleContainerStyle = css`
display: flex;
justify-content: flex-start;
margin: 19px 0px 8px;
`
const videoContainerStyle = css`
display: flex;
flex-direction: column;
margin: 20px 0px 40px;
`
const titleStyle = css`
margin: 0;
font-size: 30px;
`
const descriptionStyle = css`
margin: 0;
`
In this file we've added a getVideos function that calls the GraphQL API and returns the video data, then mapped over the videos and rendered them in the UI.
Congratulations, you've just built a full stack decentralized video app!
To deploy your application to the permaweb, you need to export it as HTML / a single page app.
To do this, add the following to your scripts
in package.json:
"export": "next build && next export"
Next, run the export command to export your app:
npm run export
Tip: Consider deploying your entire app to Arweave. You can do this manually with arkb, or use tools like SpheronHQ to make it easier with things like DNS support.
You can deploy with arkb
by using the use-bundler
flag:
arkb deploy . --wallet ../wallet --use-bundler http://bundler.arweave.net:10000
Tip: Consider adding filtering by tags, enabling users to add tags and then filter them based on a topic or tag. See this repo for a reference.
Tip: Consider implementing a social graph with Lens Protocol.
Author: dabit3
Source code: https://github.com/dabit3/arweave-workshop
#next #nextjs #react #javascript #web3 #metamask #blockchain #solidity #graphql #arweave
1660120658
Powering the Front-end with React, GraphQL and Relay. Technologies like React, GraphQL and Relay play a tremendous role in shifting not only the way we build apps but also how we write code.
Nowadays, fetching and managing data has become the critical pathway of many apps, whether you're trying to simplify and speed up development or looking for the best user experience.
If you're new to this ecosystem, don't panic! I'll share with you some tips and walk through some hands-on examples to show you how easy it is to get started and open a world of possibilities.
#react #graphql #relay
1660062540
A clone of h@ckernews built with React, Node.js and GraphQL following the tutorials on https://howtographql.com
Built with the following technologies and libraries
git clone https://github.com/marconunnari/hackernews-clone
cd hackernews-clone
cd backend
yarn
yarn prisma migrate dev
yarn start
cd frontend
yarn
yarn start
Author: marconunnari
Source code: https://github.com/marconunnari/hackernews-clone
1659864540
Basecode is a full-stack scaffolding generator for Kotlin, Spring Boot, GraphQL, React (NextJS) and PostgreSQL.
Basecode introduces the concept of "non-intrusive relational scaffolding", which is designed to keep your code maintainable, even for entities with 1-N relationships.
Make sure you have Go 1.16 or later installed. Then run:
go install github.com/basecode/cmd/basecode@latest
Provided that basecode is available under the alias basecode
, you can create a new project using basecode new
.
For example:
basecode new com.mycorp blog
Using basecode generate
, you can generate code based using one of the following generators.
backend:scaffold, bes Backend Scaffold
backend:model, bem Model files, including migration script, entity and repository
backend:api, bea GraphQL API (schema and resolvers)
backend:service, bsv Service between API and repository
frontend, fe Frontend Support
frontend:scaffold, fes Frontend Scaffold (Generate frontend support first)
scaffold, s Backend and Frontend Scaffold (Generate frontend support first)
For more information about the generators, use -h
:
basecode generate scaffold -h
In most cases, you will want to use scaffold
. This generator takes a model name and a list of field names and types and will generate backend and frontend code for the model you specified. For example:
basecode generate scaffold Post title:string description:text
Available types:
For each of these types, you can add ? to make this type optional. For example: title:string?.
Example
When you want, for example, to generate a blog, you can do that as follows:
basecode new com.mycorp blog
cd blog
basecode generate scaffold Post title
basecode generate scaffold Comment postId:Post comment
Most generators specify the following parameters:
-d, --delete
-h, --help
-o, --overwrite
Here:
delete will undo the file generation. This command may also generate additional files, such as migration scripts for dropping a previously created table.
overwrite will overwrite any existing files. When this option is not specified, Basecode will abort when a file is about to be overwritten.
For developing your application, you can use docker-compose up to spin up a development database. You can then either start the backend using your IDE by running the main method in the Application.kt file, or start the Spring Boot server using ./mvnw spring-boot:run. You should be able to access your GraphQL dashboard at: http://localhost:8080/graphiql.
To start the frontend, make sure your artifacts are installed using npm install
and run npm run dev
.
When both the backend and frontend are running, you can build your next best thing at: http://localhost:3000
. (Note: as of yet, there is no index page). If you created an entity called Post
, you will find your scaffolds at: http://localhost:3000/posts
.
Author: wnederhof
Source code: https://github.com/wnederhof/basecode
License: MIT license
#spring #springboot #java #postgresql #graphql #react
1659760362
Getting started with GraphQL in .NET - In this tutorial, you'll learn what GraphQL is, how to build a GraphQL API with Hot Chocolate on ASP.Net Core.
GraphQL is a great way to expose your APIs, and it has changed the way we think about consuming data over HTTP. Not only does GraphQL give us the power to ask for exactly what we want, but it also exposes data in a way that is more aligned with the way we think about data.
Over the last two years, GraphQL has become more and more mainstream. The ecosystem has grown phenomenally, and major players like Amazon, Twitter, Facebook, and more are all committed to GraphQL.
But what is GraphQL? What are the benefits of using GraphQL instead of REST?
Together, we will look at the core problems that we are facing with the traditional REST service layers, which still power most of the Web.
After we have a better understanding of GraphQL, we will explore how we can build a GraphQL API with Hot Chocolate on ASP.Net Core. We will look at Prisma filters and how we can get your existing infrastructure under this new service layer. We will merge data from different sources like you did not think was possible by using the power of the GraphQL resolver concept.
#graphql #dotnet #api
1659753480
A Ruby implementation of GraphQL.
Install from RubyGems by adding it to your Gemfile
, then bundling.
# Gemfile
gem 'graphql'
$ bundle install
$ rails generate graphql:install
After this, you may need to run bundle install
again, as by default graphiql-rails is added on installation.
Or, see "Getting Started".
I also sell GraphQL::Pro which provides several features on top of the GraphQL runtime, including Pundit authorization, CanCan authorization, Pusher-based subscriptions and persisted queries. Besides that, Pro customers get email support and an opportunity to support graphql-ruby's development!
Author: rmosolgo
Source code: https://github.com/rmosolgo/graphql-ruby
License: MIT license
1659746160
This gem provides a field-level authorization for graphql-ruby.
Define a GraphQL schema:
# Define a type
class PostType < GraphQL::Schema::Object
field :id, ID, null: false
field :title, String, null: true
end
# Define a query
class QueryType < GraphQL::Schema::Object
field :posts, [PostType], null: false do
argument :user_id, ID, required: true
end
def posts(user_id:)
Post.where(user_id: user_id)
end
end
# Define a schema
class Schema < GraphQL::Schema
use GraphQL::Execution::Interpreter
use GraphQL::Analysis::AST
query QueryType
end
# Execute query
Schema.execute(query, variables: { userId: 1 }, context: { current_user: current_user })
Add GraphQL::Guard
to your schema:
class Schema < GraphQL::Schema
use GraphQL::Execution::Interpreter
use GraphQL::Analysis::AST
query QueryType
use GraphQL::Guard.new
end
Now you can define guard
for a field, which will check permissions before resolving the field:
class QueryType < GraphQL::Schema::Object
field :posts, [PostType], null: false do
argument :user_id, ID, required: true
guard ->(obj, args, ctx) { args[:user_id] == ctx[:current_user].id }
end
...
end
You can also define guard
, which will be executed for every *
field in the type:
class PostType < GraphQL::Schema::Object
guard ->(obj, args, ctx) { ctx[:current_user].admin? }
...
end
If guard
block returns nil
or false
, then it'll raise a GraphQL::Guard::NotAuthorizedError
error.
Alternatively, it's possible to extract and describe all policies by using PORO (Plain Old Ruby Object), which should implement a guard
method. For example:
class GraphqlPolicy
RULES = {
QueryType => {
posts: ->(obj, args, ctx) { args[:user_id] == ctx[:current_user].id }
},
PostType => {
'*': ->(obj, args, ctx) { ctx[:current_user].admin? }
}
}
def self.guard(type, field)
RULES.dig(type, field)
end
end
Pass this object to GraphQL::Guard
:
class Schema < GraphQL::Schema
use GraphQL::Execution::Interpreter
use GraphQL::Analysis::AST
query QueryType
use GraphQL::Guard.new(policy_object: GraphqlPolicy)
end
When using a policy object, you may want to allow introspection queries to skip authorization. A simple way to avoid having to whitelist every introspection type in the RULES
hash of your policy object is to check the type
parameter in the guard
method:
def self.guard(type, field)
type.introspection? ? ->(_obj, _args, _ctx) { true } : RULES.dig(type, field) # or "false" to restrict an access
end
GraphQL::Guard
will use the policy in the following order of priority:
class GraphqlPolicy
RULES = {
PostType => {
'*': ->(obj, args, ctx) { ctx[:current_user].admin? }, # <=== 4
title: ->(obj, args, ctx) { ctx[:current_user].admin? } # <=== 2
}
}
def self.guard(type, field)
RULES.dig(type, field)
end
end
class PostType < GraphQL::Schema::Object
guard ->(obj, args, ctx) { ctx[:current_user].admin? } # <=== 3
field :title, String, null: true, guard: ->(obj, args, ctx) { ctx[:current_user].admin? } # <=== 1
end
class Schema < GraphQL::Schema
use GraphQL::Execution::Interpreter
use GraphQL::Analysis::AST
query QueryType
use GraphQL::Guard.new(policy_object: GraphqlPolicy)
end
You can simply reuse your existing policies if you really want. You don't need any monkey patches or magic for it ;)
# Define an ability
class Ability
include CanCan::Ability
def initialize(user)
user ||= User.new
if user.admin?
can :manage, :all
else
can :read, Post, author_id: user.id
end
end
end
# Use the ability in your guard
class PostType < GraphQL::Schema::Object
guard ->(post, args, ctx) { ctx[:current_ability].can?(:read, post) }
...
end
# Pass the ability
Schema.execute(query, context: { current_ability: Ability.new(current_user) })
# Define a policy
class PostPolicy < ApplicationPolicy
def show?
user.admin? || record.author_id == user.id
end
end
# Use the ability in your guard
class PostType < GraphQL::Schema::Object
guard ->(post, args, ctx) { PostPolicy.new(ctx[:current_user], post).show? }
...
end
# Pass current_user
Schema.execute(query, context: { current_user: current_user })
By default GraphQL::Guard
raises a GraphQL::Guard::NotAuthorizedError
exception if access to the field is not authorized. You can change this behavior, by passing custom not_authorized
lambda. For example:
class SchemaWithErrors < GraphQL::Schema
use GraphQL::Execution::Interpreter
use GraphQL::Analysis::AST
query QueryType
use GraphQL::Guard.new(
# By default it raises an error
# not_authorized: ->(type, field) do
# raise GraphQL::Guard::NotAuthorizedError.new("#{type}.#{field}")
# end
# Returns an error in the response
not_authorized: ->(type, field) do
GraphQL::ExecutionError.new("Not authorized to access #{type}.#{field}")
end
)
end
In this case executing a query will continue, but return nil
for not authorized field and also an array of errors
:
SchemaWithErrors.execute("query { posts(user_id: 1) { id title } }")
# => {
# "data" => nil,
# "errors" => [{
# "messages" => "Not authorized to access Query.posts",
# "locations": { "line" => 1, "column" => 9 },
# "path" => ["posts"]
# }]
# }
In more advanced cases, you may want not to return errors
only for some unauthorized fields. Simply return nil
if user is not authorized to access the field. You can achieve it, for example, by placing the logic into your PolicyObject
:
class GraphqlPolicy
RULES = {
PostType => {
'*': {
guard: ->(obj, args, ctx) { ... },
not_authorized: ->(type, field) { GraphQL::ExecutionError.new("Not authorized to access #{type}.#{field}") }
},
title: {
guard: ->(obj, args, ctx) { ... },
not_authorized: ->(type, field) { nil } # simply return nil if not authorized, no errors
}
}
}
def self.guard(type, field)
RULES.dig(type, field, :guard)
end
def self.not_authorized_handler(type, field)
RULES.dig(type, field, :not_authorized) || RULES.dig(type, :'*', :not_authorized)
end
end
class Schema < GraphQL::Schema
use GraphQL::Execution::Interpreter
use GraphQL::Analysis::AST
query QueryType
mutation MutationType
use GraphQL::Guard.new(
policy_object: GraphqlPolicy,
not_authorized: ->(type, field) {
handler = GraphqlPolicy.not_authorized_handler(type, field)
handler.call(type, field)
}
)
end
It's possible to hide fields from being introspectable and accessible based on the context. For example:
class PostType < GraphQL::Schema::Object
field :id, ID, null: false
field :title, String, null: true do
# The field "title" is accessible only for beta testers
mask ->(ctx) { ctx[:current_user].beta_tester? }
end
end
Add this line to your application's Gemfile:
gem 'graphql-guard'
And then execute:
$ bundle
Or install it yourself as:
$ gem install graphql-guard
It's possible to test fields with guard
in isolation:
# Your type
class QueryType < GraphQL::Schema::Object
field :posts, [PostType], null: false, guard: ->(obj, args, ctx) { ... }
end
# Your test
require "graphql/guard/testing"
posts = QueryType.field_with_guard('posts')
result = posts.guard(obj, args, ctx)
expect(result).to eq(true)
If you would like to test your fields with policy objects:
# Your type
class QueryType < GraphQL::Schema::Object
field :posts, [PostType], null: false
end
# Your policy object
class GraphqlPolicy
def self.guard(type, field)
->(obj, args, ctx) { ... }
end
end
# Your test
require "graphql/guard/testing"
posts = QueryType.field_with_guard('posts', GraphqlPolicy)
result = posts.guard(obj, args, ctx)
expect(result).to eq(true)
After checking out the repo, run bin/setup
to install dependencies. Then, run rake spec
to run the tests. You can also run bin/console
for an interactive prompt that will allow you to experiment.
To install this gem onto your local machine, run bundle exec rake install
. To release a new version, update the version number in version.rb
, and then run bundle exec rake release
, which will create a git tag for the version, push git commits and tags, and push the .gem
file to rubygems.org.
Bug reports and pull requests are welcome on GitHub at https://github.com/exAspArk/graphql-guard. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the Contributor Covenant code of conduct.
Everyone interacting in the Graphql::Guard project's codebases, issue trackers, chat rooms and mailing lists is expected to follow the code of conduct.
Author: exAspArk
Source code: https://github.com/exAspArk/graphql-guard
License: MIT license
1659738720
GraphQL Client is a Ruby library for declaring, composing and executing GraphQL queries.
Add graphql-client
to your Gemfile and then run bundle install
.
# Gemfile
gem 'graphql-client'
Sample configuration for a GraphQL Client to query from the SWAPI GraphQL Wrapper.
require "graphql/client"
require "graphql/client/http"
# Star Wars API example wrapper
module SWAPI
# Configure GraphQL endpoint using the basic HTTP network adapter.
HTTP = GraphQL::Client::HTTP.new("https://example.com/graphql") do
def headers(context)
# Optionally set any HTTP headers
{ "User-Agent": "My Client" }
end
end
# Fetch latest schema on init, this will make a network request
Schema = GraphQL::Client.load_schema(HTTP)
# However, it's smart to dump this to a JSON file and load from disk
#
# Run it from a script or rake task
# GraphQL::Client.dump_schema(SWAPI::HTTP, "path/to/schema.json")
#
# Schema = GraphQL::Client.load_schema("path/to/schema.json")
Client = GraphQL::Client.new(schema: Schema, execute: HTTP)
end
If you haven't already, familiarize yourself with the GraphQL query syntax. Queries are declared with the same syntax inside of a <<-'GRAPHQL'
heredoc. There isn't any special query builder Ruby DSL.
This client library encourages all GraphQL queries to be declared statically and assigned to a Ruby constant.
HeroNameQuery = SWAPI::Client.parse <<-'GRAPHQL'
query {
hero {
name
}
}
GRAPHQL
Queries can reference variables that are passed in at query execution time.
HeroFromEpisodeQuery = SWAPI::Client.parse <<-'GRAPHQL'
query($episode: Episode) {
hero(episode: $episode) {
name
}
}
GRAPHQL
Fragments are declared similarly.
HumanFragment = SWAPI::Client.parse <<-'GRAPHQL'
fragment on Human {
name
homePlanet
}
GRAPHQL
To include a fragment in a query, reference the fragment by constant.
HeroNameQuery = SWAPI::Client.parse <<-'GRAPHQL'
{
luke: human(id: "1000") {
...HumanFragment
}
leia: human(id: "1003") {
...HumanFragment
}
}
GRAPHQL
This works for namespaced constants.
module Hero
Query = SWAPI::Client.parse <<-'GRAPHQL'
{
luke: human(id: "1000") {
...Human::Fragment
}
leia: human(id: "1003") {
...Human::Fragment
}
}
GRAPHQL
end
::
is invalid in regular GraphQL syntax, but #parse
makes an initial pass on the query string and resolves all the fragment spreads with constantize
.
Pass the reference of a parsed query definition to GraphQL::Client#query
. Data is returned back in a wrapped GraphQL::Client::Schema::ObjectType
struct that provides Ruby-ish accessors.
result = SWAPI::Client.query(Hero::Query)
# The raw data is Hash of JSON values
# result["data"]["luke"]["homePlanet"]
# The wrapped result allows to you access data with Ruby methods
result.data.luke.home_planet
GraphQL::Client#query
also accepts variables and context parameters that can be leveraged by the underlying network executor.
result = SWAPI::Client.query(Hero::HeroFromEpisodeQuery, variables: {episode: "JEDI"}, context: {user_id: current_user_id})
If you're using Ruby on Rails ERB templates, there's an ERB extension that allows static queries to be defined in the template itself.
In standard Ruby you can simply assign queries and fragments to constants and they'll be available throughout the app. However, the contents of an ERB template is compiled into a Ruby method, and methods can't assign constants. So a new ERB tag was extended to declare static sections that include a GraphQL query.
<%# app/views/humans/human.html.erb %>
<%graphql
fragment HumanFragment on Human {
name
homePlanet
}
%>
<p><%= human.name %> lives on <%= human.home_planet %>.</p>
These <%graphql
sections are simply ignored at runtime but make their definitions available through constants. The module namespacing is derived from the .erb
's path plus the definition name.
>> "views/humans/human".camelize
=> "Views::Humans::Human"
>> Views::Humans::Human::HumanFragment
=> #<GraphQL::Client::FragmentDefinition>
github/github-graphql-rails-example is an example application using this library to implement views on the GitHub GraphQL API.
Add graphql-client
to your app's Gemfile:
gem 'graphql-client'
Author: github
Source code: https://github.com/github/graphql-client
License: MIT license
1659731280
Provides an executor for the graphql
gem which allows queries to be batched.
Add this line to your application's Gemfile:
gem 'graphql-batch'
And then execute:
$ bundle
Or install it yourself as:
$ gem install graphql-batch
Require the library
require 'graphql/batch'
Define a custom loader, which is initialized with arguments that are used for grouping and a perform method for performing the batch load.
class RecordLoader < GraphQL::Batch::Loader
def initialize(model)
@model = model
end
def perform(ids)
@model.where(id: ids).each { |record| fulfill(record.id, record) }
ids.each { |id| fulfill(id, nil) unless fulfilled?(id) }
end
end
Use GraphQL::Batch
as a plugin in your schema after specifying the mutation so that GraphQL::Batch
can extend the mutation fields to clear the cache after they are resolved.
class MySchema < GraphQL::Schema
query MyQueryType
mutation MyMutationType
use GraphQL::Batch
end
The loader class can be used from the resolver for a graphql field by calling .for
with the grouping arguments to get a loader instance, then call .load
on that instance with the key to load.
field :product, Types::Product, null: true do
argument :id, ID, required: true
end
def product(id:)
RecordLoader.for(Product).load(id)
end
The loader also supports batch loading an array of records instead of just a single record, via load_many
. For example:
field :products, [Types::Product, null: true], null: false do
argument :ids, [ID], required: true
end
def products(ids:)
RecordLoader.for(Product).load_many(ids)
end
Although this library doesn't have a dependency on active record, the examples directory has record and association loaders for active record which handles edge cases like type casting ids and overriding GraphQL::Batch::Loader#cache_key to load associations on records with the same id.
GraphQL::Batch::Loader#load returns a Promise using the promise.rb gem to provide a promise based API, so you can transform the query results using .then
def product_title(id:)
RecordLoader.for(Product).load(id).then do |product|
product.title
end
end
You may also need to do another query that depends on the first one to get the result, in which case the query block can return another query.
def product_image(id:)
RecordLoader.for(Product).load(id).then do |product|
RecordLoader.for(Image).load(product.image_id)
end
end
If the second query doesn't depend on the first one, then you can use Promise.all, which allows each query in the group to be batched with other queries.
def all_collections
Promise.all([
CountLoader.for(Shop, :smart_collections).load(context.shop_id),
CountLoader.for(Shop, :custom_collections).load(context.shop_id),
]).then do |results|
results.reduce(&:+)
end
end
.then
can optionally take two lambda arguments, the first of which is equivalent to passing a block to .then
, and the second one handles exceptions. This can be used to provide a fallback
def product(id:)
# Try the cache first ...
CacheLoader.for(Product).load(id).then(nil, lambda do |exc|
# But if there's a connection error, go to the underlying database
raise exc unless exc.is_a?(Redis::BaseConnectionError)
logger.warn exc.message
RecordLoader.for(Product).load(id)
end)
end
Your loaders can be tested outside of a GraphQL query by doing the batch loads in a block passed to GraphQL::Batch.batch
. That method will set up thread-local state to store the loaders, batch load any promise returned from the block then clear the thread-local state to avoid leaking state between tests.
def test_single_query
product = products(:snowboard)
title = GraphQL::Batch.batch do
RecordLoader.for(Product).load(product.id).then(&:title)
end
assert_equal product.title, title
end
After checking out the repo, run bin/setup
to install dependencies. Then, run rake test
to run the tests. You can also run bin/console
for an interactive prompt that will allow you to experiment.
See our contributing guidelines for more information.
Author: Shopify
Source code: https://github.com/Shopify/graphql-batch
License: MIT license
1659628380
Bindings for mobx-state-tree and GraphQL
Installation
Installation: yarn add mobx mobx-state-tree mobx-react react react-dom mst-gql graphql-request
If you want to use graphql tags, also install: yarn add graphql graphql-tag
Why
Watch the introduction talk @ react-europe 2019: Data models all the way by Michel Weststrate
Both GraphQL and mobx-state-tree are model-first driven approaches, so they have a naturally matching architecture. If you are tired of having your data shapes defined in GraphQL, MobX-state-tree and possibly TypeScript as well, this project might be a great help!
Furthermore, this project closes the gap between GraphQL and mobx-state-tree as state management solutions. GraphQL is very transport oriented, while MST is great for client-side state management. GraphQL clients like Apollo do support some form of client-side state, but that is still quite cumbersome compared to the full model-driven power unlocked by MST, where local actions, reactive views, and the MobX-optimized rendering model can be used.
Benefits:
Overview & getting started
The mst-gql library consists of two parts:
The scaffolder is a compile-time utility that generates an MST store and models based on the type information provided by your endpoint. This utility doesn't just generate models for all your types, but also query, mutation and subscription code based on the data statically available.
The runtime library is configured by the scaffolder, and provides entry points to use the generated or hand-written queries, React components, and additional utilities you want to mixin to your stores.
To get started, after installing mst-gql and its dependencies, the first task is to scaffold your store and runtime models based on your graphql endpoint.
To scaffold TypeScript models based on a locally running graphQL endpoint on port 4000, run: yarn mst-gql --format ts http://localhost:4000/graphql
. There are several additional args that can be passed to the CLI or put in a config file. Both are detailed below.
Tip: Note that API descriptions found in the graphQL endpoint will generally end up in the generated code, so make sure to write them!
After running the scaffolder, a bunch of files will be generated in the src/models/ directory of your project (or whatever path you provided):
(The files without the .base suffix can and should be edited. They won't be overwritten when you scaffold unless you use the force option.)
index - A barrel file that exposes all interesting things generated.
RootStore.base - A mobx-state-tree store that acts as a graphql client. It provides the .query, .mutate and .subscribe low-level APIs to run graphql queries, as well as queryXXX, mutateXXX and subscribeXXX actions based on the query definitions found in your graphQL endpoint.
RootStore - Extends RootStore.base with any custom logic. This is the version we actually export and use.
ModelBase - Extends mst-gql's abstract model type with any custom logic, to be inherited by every concrete model type.
XXXModel.base - mobx-state-tree types, one per type found in the graphQL endpoint. These inherit from ModelBase and expose an xxxPrimitives query fragment, which can be used as a selector to obtain all the primitive fields of an object type, and a type that describes the runtime type of a model instance, useful for typing parameters and react component properties.
XXXModel - Extends XXXModel.base with any custom logic. Again, this is the version we actually use.
reactUtils - A set of utilities to be used in React, exposing StoreContext, a strongly typed React context that can be used to make the RootStore available through your app, and useQuery, a react hook that can be used to render queries, mutations etc. It is bound to the StoreContext automatically.
The following graphQL schema will generate the store and the Message model shown below:
type User {
id: ID
name: String!
avatar: String!
}
type Message {
id: ID
user: User!
text: String!
}
type Query {
messages: [Message]
message(id: ID!): Message
me: User
}
type Subscription {
newMessages: Message
}
type Mutation {
changeName(id: ID!, name: String!): User
}
MessageModel.base.ts
(shortened):
export const MessageModelBase = ModelBase.named("Message").props({
__typename: types.optional(types.literal("Message"), "Message"),
id: types.identifier,
user: types.union(types.undefined, MSTGQLRef(types.late(() => User))),
text: types.union(types.undefined, types.string)
})
RootStore.base.ts
(shortened):
export const RootStoreBase = MSTGQLStore.named("RootStore")
.props({
messages: types.optional(types.map(types.late(() => Message)), {}),
users: types.optional(types.map(types.late(() => User)), {})
})
.actions((self) => ({
queryMessages(
variables?: {},
resultSelector = messagePrimitives,
options: QueryOptions = {}
) {
// implementation omitted
},
mutateChangeName(
variables: { id: string; name: string },
resultSelector = userPrimitives,
optimisticUpdate?: () => void
) {
// implementation omitted
}
}))
(Yes, that is a lot of code. A lot of code that you don't have to write.)
Note that the mutations and queries are now strongly typed! The parameters will be type checked, and the return types of the query methods are correct. Nonetheless, you will often write wrapper methods around those generated actions, to, for example, define the fragments of the result set that should be retrieved.
To prepare your app to use the RootStore
, it needs to be initialized, which is pretty straight forward, so here is quick example of what an entry file might look like:
// 1
import React from "react"
import * as ReactDOM from "react-dom"
import "./index.css"
import { App } from "./components/App"
// 2
import { createHttpClient } from "mst-gql"
import { RootStore, StoreContext } from "./models"
// 3
const rootStore = RootStore.create(undefined, {
gqlHttpClient: createHttpClient("http://localhost:4000/graphql")
})
// 4
ReactDOM.render(
<StoreContext.Provider value={rootStore}>
<App />
</StoreContext.Provider>,
document.getElementById("root")
)
// 5
window.store = rootStore
In the numbered steps above: after the imports (1, 2), we create the rootStore (3), which, in typical MST fashion, takes 2 arguments: the initial state (here undefined, but one could rehydrate server state here, or pick a snapshot from localStorage, etc.) and the environment, where gqlHttpClient, gqlWsClient or both need to be provided. We then use StoreContext.Provider (4) to make the store available to the rest of the rendering tree. Finally, we expose the store on window (5). This has no practical use, and should be done only in DEV builds. It is a really convenient way to quickly inspect the store, or even fire actions or queries directly from the console of the browser's developer tools. (See this talk for some cool benefits of that)
Now, we are ready to write our first React components that use the store! Because the store is a normal MST store, observer-based components can, as usual, be used to render the contents of the store.
However, mst-gql also provides the useQuery hook that can be used to track the state of an ongoing query or mutation. It can be used in many different ways (see the details below), but here is a quick example:
import React from "react"
import { observer } from "mobx-react"
import { Error, Loading, Message } from "./"
import { useQuery } from "../models/reactUtils"
export const Home = observer(() => {
const { store, error, loading, data } = useQuery((store) =>
store.queryMessages()
)
if (error) return <Error>{error.message}</Error>
if (loading) return <Loading />
return (
<ul>
{data.messages.map((message) => (
<Message key={message.id} message={message} />
))}
</ul>
)
})
Important: useQuery
should always be used in combination with observer
from the "mobx-react"
or "mobx-react-lite"
package! Without that, the component will not re-render automatically!
The useQuery
hook is imported from the generated reactUtils
, and is bound automatically to the right store context. The first parameter, query
, accepts many different types of arguments, but the most convenient one is to give it a callback that invokes one of the query (or your own) methods on the store. The Query object returned from that action will be used to automatically update the rendering. It will also be typed correctly when used in this form.
The useQuery
hook component returns, among other things, the store
, loading
and data
fields.
If you just need access to the store, the useContext
hook can be used: useContext(StoreContext)
. The StoreContext
can be imported from reactUtils
as well.
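For example, here is a small sketch of firing a generated mutation from a component that only needs the store (the id and name values are placeholders):
import React, { useContext } from "react"
import { StoreContext } from "../models/reactUtils"

export const RenameButton = () => {
  // no useQuery needed: we only want to call an action on the store
  const store = useContext(StoreContext)
  return (
    <button onClick={() => store.mutateChangeName({ id: "1", name: "New name" })}>
      Rename
    </button>
  )
}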
Mutations work very similarly to queries. To render a mutation, the useQuery
hook can be used again. Except, this time we start without an initial query
parameter. We only set it once a mutation is started. For example the following component uses a custom toggle
action that wraps a graphQL mutation:
import * as React from "react"
import { observer } from "mobx-react"
import { useQuery } from "../models/reactUtils"
export const Todo = observer(({ todo }) => {
const { setQuery, loading, error } = useQuery()
return (
<li onClick={() => setQuery(todo.toggle())}>
<p className={`${todo.complete ? "strikethrough" : ""}`}>{todo.text}</p>
{error && <span>Failed to update: {error}</span>}
{loading && <span>(updating)</span>}
</li>
)
})
The Todo model used in the above component is defined as follows:
export const TodoModel = TodoModelBase.actions((self) => ({
toggle() {
return self.store.mutateToggleTodo({ id: self.id }, undefined, () => {
self.complete = !self.complete
})
}
}))
There are a few things to notice:
- The toggle action wraps around the generated mutateToggleTodo mutation of the base model, giving us a much more convenient client api.
- The query object created by mutateToggleTodo is returned from our action, so that we can pass it (for example) to setQuery as done in the previous listing.
- The third argument passed to the mutation is an optimisticUpdate callback. This function is executed immediately when the mutation is created, without awaiting its result, so that the change becomes immediately visible in the UI. However, MST will record the patches. If the mutation fails in the future, any changes made inside this optimisticUpdate callback will automatically be rolled back by reverse applying the recorded patches!
Mutations and queries take as second argument a result selector, which defines which objects we want to receive back from the backend. Our mutateToggleTodo above leaves it undefined, which defaults to querying all the shallow, primitive fields of the object (including __typename and id).
However, in the case of toggling a Todo, this is actually overfetching, as we know the text won't be changed by the mutation. So instead we can provide a selector to indicate that we are only interested in the complete
property: "__typename id complete"
. Note that we have to include __typename
and id
so that mst-gql knows to which object the result should be applied!
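As a sketch, the toggle action from above could pass that narrower selector explicitly (assuming the same generated mutateToggleTodo signature):
export const TodoModel = TodoModelBase.actions((self) => ({
  toggle() {
    // only ask the server for the field that can actually change
    return self.store.mutateToggleTodo({ id: self.id }, "__typename id complete", () => {
      self.complete = !self.complete
    })
  }
}))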
Children can be retrieved as well by specifying them explicitly in the result selector, for example: "__typename id complete assignee { __typename id name }". Note that for children __typename
. Note that for children __typename
and id
(if applicable) should be selected as well!
It is possible to use gql
from the graphql-tag
package. This enables highlighting in some IDEs, and potentially enables static analysis.
However, the recommended way to write the result selectors is to use the query builder that mst-gql will generate for you. This querybuilder is entirely strongly typed, provides auto completion and automatically takes care of __typename
and id
fields. It can be used by passing a function as second argument to a mutation or query. That callback will be invoked with a querybuilder for the type of object that is returned. With the querybuilder, we could write the above mutation as:
export const TodoModel = TodoModelBase.actions((self) => ({
toggle() {
return self.store.mutateToggleTodo({ id: self.id }, (todo) => todo.complete)
}
}))
To select multiple fields, simply keep "dotting", as the query is a fluent interface. For example: user => user.firstname.lastname.avatar
selects 3 fields.
Complex children can be selected by calling the field as a function, and providing a callback to that field function (which in turn is again a query builder for the appropriate type). So the following example selector selects the timestamp
and text
of a message. The name
and avatar
inside the user
property, and finally also the likes
properties. For the likes
no further subselector was specified, which means that only __typename
and id
will be retrieved.
// prettier-ignore
msg => msg
.timestamp
.text
.user(user => user.name.avatar)
.likes()
.toString()
To create reusable query fragments, the following syntax can be used instead:
import { selectFromMessage } from "./MessageModel.base"
// prettier-ignore
export const MESSAGE_FRAGMENT = selectFromMessage()
.timestamp
.text
.user(user => user.name.avatar)
.likes()
.toString()
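Since the fragment is ultimately just a selector string, it can then be passed wherever a result selector is accepted; a sketch, assuming MESSAGE_FRAGMENT is exported from your MessageModel file and you have access to the store:
import { MESSAGE_FRAGMENT } from "./MessageModel"

// reuse the fragment as the result selector of a generated query action
store.queryMessages({}, MESSAGE_FRAGMENT)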
You can customize all of the defined mst types: RootStore
, ModelBase
, and every XXXModel
.
However, some files (including but not limited to .base
files) should not be touched, as they probably need to be scaffolded again in the future.
Thanks to how MST models compose, this means that you can introduce as many additional views
, actions
and props
as you want to your models, by chaining more calls onto the model definitions. Those actions will often wrap around the generated methods, setting some predefined parameters, or composing the queries into bigger operations.
Example of a generated model that introduces a toggle
action that wraps around one of the generated mutations:
// src/models/TodoModel.js
import { TodoModelBase } from "./TodoModel.base"
export const TodoModel = TodoModelBase.actions((self) => ({
toggle() {
return self.store.mutateToggleTodo({ id: self.id })
}
}))
That's it for the introduction! For the many different ways in which the above can be applied in practice, check out the examples.
There is an exported function called getDataFromTree which you can use to preload all queries. Note that you must set ssr: true as an option in order for this to work:
async function preload() {
const client = RootStore.create(undefined, {
gqlHttpClient: createHttpClient("http://localhost:4000/graphql"),
ssr: true
})
const html = await getDataFromTree(<App client={client} />, client)
const initialState = getSnapshot(client)
return [html, initialState]
}
Because you can control what data is fetched for a model in graphql and mst-gql, it is possible for a model to have some fields that have not yet been fetched from the server. This can complicate things when we're talking about a field that can also be "empty". To help with this, a field in mst-gql will be undefined
when it has not been fetched from the server and, following graphql conventions, will be null
if the field has been fetched but is in fact empty.
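A short sketch of what that distinction looks like in practice (message is assumed to be a loaded Message instance with an optional user field):
if (message.user === undefined) {
  // the user field was not part of any result selector yet
} else if (message.user === null) {
  // the field was fetched, and the server says there is no user
} else {
  console.log(message.user.name)
}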
In-depth store semantics
mst-gql generates model types for every object type in your graphql definition. (Except for those excluded using the excludes
flag). For any query or mutation that is executed by the store, the returned data will be automatically, and recursively parsed into those generated MST models. This means that for any query, you get a 'rich' object back. Finding the right model type is done based on the GraphQL meta field __typename
, so make sure to include it in your graphql queries!
The philosophy behind MST / mst-gql is that every 'business concept' should exist only once in the client state, so that there is only one source of truth for every message, user, order, product etc. that you are holding in memory. To achieve this, it is recommended that every uniquely identifiable concept in your application has an id field of the graphQL ID type. By default, any object type for which this is true is considered to be a "root type".
Root types have a few features:
- References to them are established with types.reference. This means you can use deep fields in the UI, like message.author.name, despite the fact that this data is stored normalized in the store.
GraphQL has no explicit distinction between compositional and associative relationships between data types. In general, references between graphQL objects are dealt with as follows:
- If the referred object is a root type, types.reference is used, e.g.: author: types.reference(UserModel)
- If the referred object has its own model type but is not a root type, the data is stored inline, e.g.: comments: types.array(CommentModel)
- Otherwise, types.frozen is used, and the data as returned from the query is stored literally.
GraphQL makes it possible to query a subset of the fields of any object. The upside of this is that data traffic can be minimized. The downside is that it cannot be guaranteed that any object is loaded in its 'complete' state. It means that fields might be missing in the client state, even though they are defined as mandatory in the original graphQL object type! To verify which keys are loaded, all models expose the hasLoaded(fieldName: string): boolean view, which keeps track of which fields were received at least once from the back-end.
As described above, (root) model instances are kept alive automatically. Beyond that, mst-gql also provides caching on the network level, based on the query string and variables. Several fetch policies, following those of the apollo and urql graphQL clients, are supported.
The default policy is cache-and-network
. This is different from other graphQL clients. But since mst-gql leverages the MobX reactivity system, this means that, possibly stale, results are shown on screen immediately if a response is in cache, and that the screen will automatically update as soon as a new server response arrives.
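To deviate from the default for a single request, a fetchPolicy can be passed via the options argument of a generated query action; a sketch (the "network-only" name follows the apollo/urql convention):
// always hit the server for this particular request
store.queryMessages({}, undefined, { fetchPolicy: "network-only" })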
The query cache is actually stored in MST as well, and can be accessed through store.__queryCache
.
Since the query cache is stored in the store, this means that mixins like useLocalStore
will serialize them. This will help significantly in building offline-first applications.
API
The mst-gql
command currently accepts the following arguments:
--format ts|js|mjs
The type of files that need to be generated (default: js
)
--outDir <dir>
The output directory of the generated files (default: src/models
)
--excludes 'type1,type2,typeN'
The types that should be omitted during generation, as we are not interested in them for this app.
--roots 'type1,type2,typeN'
The types that should be used as root types
--modelsOnly
Generates only models, but no queries or graphQL capabilities. This is great for backend usage, or if you want to create your own root store
--noReact
doesn't generate the React related utilities
--force
When set, existing files will always be overridden. This will drop all customizations of model classes!
--dontRenameModels
By default, mst-gql generates model names from graphql schema types that are idiomatic JavaScript/TypeScript names, i.e. type names will be PascalCased and root collection names camelCased. With --dontRenameModels
the original names - as provided by the graphql schema - will be used for generating models.
--useIdentifierNumber
Specifies the use of identifierNumber
instead of identifier
as the mst type for the generated models' IDs. This requires your models to use numbers as their identifiers. See the mobx-state-tree documentation for more information.
--fieldOverrides id:uuid:identifier,*:ID:identifierNumber
Overrides default MST types for matching GraphQL names and types. The format is gqlFieldName:gqlFieldType:mstType
. Supports full or partial wildcards for fieldNames, and full wildcards for fieldTypes. Case sensitive. If multiple matches occur, the match with the fewest wildcards will be used, followed by the order specified in the arg list if there are still multiple matches. Some examples:
*_id:*:string
- Matches any GQL type with the field name *_id
(like user_id
), and uses the MST type types.string
*:ID:identifierNumber
- Matches any GQL type with any field name and the ID
type, and uses the MST type types.identifierNumber
User.user_id:ID:number
- Matches the user_id
field on User
with the GQL type ID
, and uses the MST type types.number
Specifying this argument additionally allows the use of multiple IDs on a type. The best matched ID will be used, setting the other IDs to types.frozen()
Book.author_id:ID:identifierNumber
- Matches the author_id
field on Book
with the GQL type ID
and uses the MST type types.identifierNumber
, and sets any other GQL IDs on Book
to types.frozen()
For TS users, input types and query arguments will only be modified for fieldOverrides with a wildcard for gqlFieldName
(*:uuid:identifier
). An override like *_id:uuid:identifier
will not affect input types.
The primary use case for this feature is for GQL servers that don't always do what you want. For example, Hasura does not generate GQL ID types for UUID fields, which causes issues when trying to reference associated types in MST. To overcome this, simply specify --fieldOverrides *:UUID:identifier
*.timestamp:*:DateScalar:../scalars
- Matches any GQL type with the field name timestamp
, and uses the MST type DateScalar
imported from file ../scalars
. Usually used for graphql custom scalar
support with MST type.custom
source
The last argument is the location at which to find the graphQL definitions. This can be
http://host/graphql
schema.graphql
schema.json
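Putting the flags and the source together, a typical invocation could look something like this (the endpoint and the excluded type are placeholders):
npx mst-gql --format ts --outDir src/models --excludes 'LogEntry' http://localhost:4000/graphql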
mst-gql
also supports cosmiconfig as an alternative to using cli arguments.
A sample config can be found in Example 2.
The generated RootStore exposes the following members:
query(query, variables, options): Query
Makes a graphQL request to the backend. The result of the query is by default automatically normalized to model instances as described above. This method is also used by all the automatically scaffolded queries.
- The query parameter can be a string, or a graphql-tag based query.
- The default options are fetchPolicy: "cache-and-network" and noSsr: false.
- Returns a Query object that can be inspected to keep track of the request progress.
Be sure to at least select __typename and id in the result selector, so that mst-gql can normalize the data.
mutate(query, variables, optimisticUpdate): Query
Similar to query
, but used for mutations. If an optimisticUpdate
thunk is passed in, that function will be immediately executed so that you can optimistically update the model. However, the patches that are generated by modifying the tree will be stored, so that, if the mutation ultimately fails, the changes can be reverted. See the Optimistic updates section for more details.
subscribe(query, variables, onData): () => void
Similar to query
, but sets up a websocket-based subscription. The gqlWsClient
needs to be set during the store creation to make this possible. onData
can be provided as callback for when new data arrives.
Example initialization:
import { SubscriptionClient } from "subscriptions-transport-ws"
build a websocket client:
// see: https://www.npmjs.com/package/subscriptions-transport-ws#hybrid-websocket-transport
const gqlWsClient = new SubscriptionClient(constants.graphQlWsUri, {
reconnect: true,
connectionParams: {
headers: { authorization: `Bearer ${tokenWithRoles}` }
}
})
add the ws client when creating the store:
// see: https://github.com/mobxjs/mst-gql/blob/master/src/MSTGQLStore.ts#L42-L43
const store = RootStore.create(undefined, {
gqlHttpClient,
gqlWsClient
})
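With the gqlWsClient in place, a subscription can then be started through the subscribe member described above; a sketch (the newMessages subscription is a hypothetical field of your schema):
const stopListening = store.subscribe(
  `subscription { newMessages { __typename id text } }`,
  undefined,
  (data) => console.log("received", data)
)
// the returned function disposes the subscription
stopListening()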
When using server-side rendering tools like gatsby/next/nuxt, it is necessary to prevent using subscriptions on the server. Otherwise an error will occur, because the server is missing a websocket implementation. See the code example for gatsby.
Based on the queries, mutations and subscriptions defined at the endpoint, mst-gql automatically scaffolds methods for those onto the base root store.
This is very convenient, as you might not need to write any graphQL queries by hand yourself in your application. Beyond that, the queries now become strongly typed. When using TypeScript, both the variables
and the return type of the query will be correct.
An example signature of a generated query method is:
queryPokemons(variables: { first: number }, resultSelector = pokemonModelPrimitives, options: QueryOptions = {}): Query<PokemonModelType[]>
All parameters of this query are typically optional (unless some of the variables are required, like in the above example).
The result selector defines which fields should be fetched from the backend. By default mst-gql will fetch __typename, id and all primitive fields defined in the model, but feel free to override this to make more fine-tuned queries! For better reuse, consider doing this in a new action on the appropriate model. For example, a query to fetch all comments and likes for a message could look like:
import { MessageBaseModel } from "./MessageModel.base"
const MessageModel = MessageBaseModel.actions((self) => ({
queryCommentsAndLikes(): Query<MessageModelType> {
return self.store.queryMessage(
{ id: self.id },
`
id
__typename
comments {
id
__typename
text
likes {
__typename
author
}
}
`
)
}
}))
- RootStoreType can be used for all places in TypeScript where you need the instance type of the RootStore.
- rawRequest(query: string, variables: any): Promise. Makes a direct, raw, uncached request to the graphQL server. Should typically not be needed.
- __queryCache. See Query caching. Should typically not be needed.
- merge(data). Merges a raw graphQL response into the store, and returns a new tree with model instances. See In-depth store semantics. Should typically not be needed.
The generated models provide a storage place for data returned from GraphQL, as explained above. Beyond that, they are the place where you enrich the models with client-side only state, actions, derived views, etc.
For convenience, each model provides two views:
- hasLoaded(field): returns true if data for the specified field was received from the server
- store: a strongly typed back-reference to the RootStore that loaded this model
Beyond that, the following top-level exports are exposed from each model file:
- xxxPrimitives: A simple string that provides a ready-to-use selector for graphQL queries, selecting all the primitive fields. For example: "__typename id title text done"
- xxxModelType: A TypeScript type definition that can be used in the application if you need to refer to the instance type of this specific model
- selectFromXXX(): Returns a strongly typed querybuilder that can be used to write graphql result selector fragments more easily. Don't forget to call toString() in the end!
export interface QueryOptions {
fetchPolicy?: FetchPolicy
noSsr?: boolean
}
See Query caching for more details on fetchPolicy
. Default: "cache-and-network"
The noSsr
field indicates whether the query should be executed during Server Side Rendering, or skipped there and only executed once the page is loaded in the browser. Default: false
createHttpClient(url: string, options: HttpClientOptions = {})
Creates an HTTP client for transportation purposes. For documentation of the options, see: https://github.com/prisma/graphql-request
import { createHttpClient } from "mst-gql"
import { RootStore } from "./models/RootStore"
const gqlHttpClient = createHttpClient("http://localhost:4000/graphql")
const rootStore = RootStore.create(undefined, {
gqlHttpClient
})
Creating a websocket client can be done by using the subscriptions-transport-ws
package, and passing a client to the store as gqlWsClient
environment variable:
import { SubscriptionClient } from "subscriptions-transport-ws"
import { RootStore } from "./models/RootStore"
const gqlWsClient = new SubscriptionClient("ws://localhost:4001/graphql", {
reconnect: true
})
const rootStore = RootStore.create(undefined, {
gqlWsClient
})
Query objects capture the state of a specific query. These objects are returned from all query
and mutate
actions. Query objects are fully reactive, which means that if you use them in an observer
component, or any other reactive MobX mechanism, such as autorun
or when
, they can be tracked.
Beyond that, query objects are also then-able, which means that you can use them as a promise. The complete type of a query object is defined as follows:
class Query<T> implements PromiseLike<T> {
// Whether the Query is currently fetching data from the back-end
loading: boolean
// The data that was fetched for this query.
// Note that data might be available, even when the query object is still loading,
// depending on the fetchPolicy
data: T | undefined
// If any error occurred, it is stored here
error: any
// Forces the query to re-execute and make a new roundtrip to the back-end.
// The returned promise settles once that request is completed
refetch(): Promise<T>
// case takes an object that should have the methods `error`, `loading` and `data`.
// It immediately calls the appropriate handler based on the current query status.
// Great tool to use in a reactive context, comparable with mobx-utils.fromPromise
case<R>(handlers: {
loading(): R
error(error: any): R
data(data: T): R
}): R
// Returns the promise for the currently ongoing request
// (note that for example `refetch` will cause a new promise to become the current promise)
currentPromise()
// A short-cut to the .then handler of the current promise
then(onResolve, onError)
}
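As a sketch, both styles side by side (queryMessages is the generated action used earlier, and store is assumed to be in scope):
async function loadMessages() {
  const query = store.queryMessages()

  // reactive style: returns a value for the current state of the query
  console.log(
    query.case({
      loading: () => "loading...",
      error: (e) => `failed: ${e}`,
      data: () => "done"
    })
  )

  // promise style: resolves once the request settles
  return await query
}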
In the generated reactUtils
you will find the StoreContext
, which is a pre-initialized React context that can be used to distribute the RootStore through your application. Its primary benefit is that it is strongly typed, and that Query
components will automatically pick up the store distributed by this context.
The useQuery
hook, as found in reactUtils
can be used to create and render queries or mutations in React.
The useQuery
hook should always be used inside an observer
(provided by the mobx-react
or mobx-react-lite
package) based component!
It accepts zero, one or two arguments:
- query, the query to execute. This parameter can take the following forms:
  - It can be omitted entirely, and set later through setQuery, for example when a mutation should be tracked.
  - A plain string, such as query messages { allMessages { __typename id message date }}
  - A graphql-tag based template string
  - A Query object
  - A callback that receives the store and should return a Query object. The callback will be invoked when the component is rendered for the first time, and is a great way to delegate the query logic itself to the store. This is the recommended approach. For example: store => store.queryAllMessages()
- options, an object which can specify further options, such as:
  - variables: The variables to be substituted into the graphQL query (only used if the query is specified as graphql tag or string!)
  - fetchPolicy: See fetch policy
  - noSsr: See the noSsr option of queries
  - store: This can be used to customize which store should be used. This can be pretty convenient for testing, as it means that no Provider needs to be used.
The hook renders based on the current status of the Query object that is created from the query property. The callback is also automatically wrapped in MobX-react's observer HoC.
The hook returns one object, with the following properties:
- loading
- error
- data
- store
- query - the current Query object
- setQuery - replaces the current query being rendered. This is particularly useful for mutations or loading more data (see the sketch below)
The useQuery hook is strongly typed; if everything is set up correctly, the type of data should be inferred correctly when using TypeScript.
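A quick sketch of setQuery in action, here simply re-running the query on demand (all names besides the documented API are illustrative):
import React from "react"
import { observer } from "mobx-react"
import { useQuery } from "../models/reactUtils"

export const MessageList = observer(() => {
  const { data, loading, store, setQuery } = useQuery((store) => store.queryMessages())
  return (
    <div>
      {/* ...render data here... */}
      <button disabled={loading} onClick={() => setQuery(store.queryMessages())}>
        Refresh
      </button>
    </div>
  )
})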
For examples, see the sections Loading and rendering your first data and Mutations.
localStorageMixin
The localStorageMixin
can be used to automatically save the full state of the RootStore
. By default the store is saved after every change, but throttled so that it is saved at most once per 5 seconds. (The reason for the throttling is that, although snapshotting is cheap, serializing a snapshot to a string is expensive.) If you only want to persist parts of the store, you can use the filter option to specify which keys should be stored.
Options:
- storage (the storage object to use. Defaults to window.localStorage)
- throttle (in milliseconds)
- storageKey (the key to be used to store in the local storage)
- filter (an optional array of string keys that determines which data will be stored to local storage)
Example:
models/RootStore.js
const RootStore = RootStoreBase.extend(
localStorageMixin({
throttle: 1000,
storageKey: "appFluff"
filter: ['todos', 'key.subkey']
})
)
To use this mixin with react-native you can pass AsyncStorage
to the mixin using the storage
option:
Example:
models/RootStore.js
import AsyncStorage from "@react-native-community/async-storage"
const RootStore = RootStoreBase.extend(
localStorageMixin({
storage: AsyncStorage,
throttle: 1000,
storageKey: "appFluff"
})
)
Examples
This project contains usage examples in the examples
directory showcasing various ways mst-gql
can be used.
- Run yarn in the root directory of this project before running an example.
- For setup instructions, see the README.md within the example folder.
The 1-getting-started
example is a very trivial project, that shows how to use mst-gql
together with TypeScript and React. Features:
- Customizes the TodoModel by introducing a toggle action, which uses an optimistic update.
The 2-scaffolding example generates code for a non-trivial project and runs it through the compiler.
3-twitter-clone is the most interesting example project. Highlights:
- References are used for MessageModel.user and MessageModel.likes.
- MessageModel.replyTo is a field that refers to a MessageModel, so that a tweet tree can be expressed.
- MessageModel.isLikedByMe introduces a client-only derived view.
- RootStore has a property sortedMessages to store local state.
4-apollo-tutorial is a port of the apollo full-stack tutorial. Note that the example doesn't use apollo anymore. See its readme for specific install instructions.
The example has a lot of similarities with example 3, and also uses the localStorageMixin so that the app can start without network requests.
5-nextjs is an example using next.
Tips & tricks
... you might have forgotten to include __typename
or id
in the result selector of your string or graphql-tag based queries.
If the view is stuck in a loading state, but you can see in the network requests that you did get a proper response, you probably forgot to include observer
on the component that renders the query
If you are using prettier, it is strongly recommended to make sure that the files that are generated over and over again, are not formatted, by setting up a .prettierignore
file.
src/models/index.*
src/models/reactUtils.*
src/models/*.base.*
src/models/*Enum.*
Or, alternatively, if you want to properly format the generated files based on your standards, make sure that you always run prettier on those files after scaffolding.
In general we recommend keeping the components dumb, and creating utility functions in the store or models to perform the queries needed for a certain UI component. This encourages reuse of queries between components. Furthermore, it makes testing easier, as it will be possible to test your query methods directly, without depending on rendering components. As is done for example here
...are best modelled using separate models, or by introducing additional properties and actions to keep track of paging, offset, search filters, etcetera. This is done for example in the twitter example and the apollo example
Mutations should select the fields they change in the result selector
It is possible to scaffold with the --modelsOnly
flag. This generates a RootStore and the model classes, but no code for the queries or React, and hence it is environment and transportation independent. Use this option if you want to use models on the server, or on the client in combination with another graphql client. Use store.merge(data)
to merge in query results you get from your graphql client, and get back instantiated model objects.
It is quite easy to stub away the backend and transportation layer, by providing a custom client to the rootStore, as is done here.
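A minimal sketch of that idea, assuming the store only ever calls request(query, variables) on the provided client (the graphql-request interface used by createHttpClient):
// a fake transport that answers every request with canned data
const fakeHttpClient = {
  request: async (_query: string, _variables?: any) => ({
    messages: [{ __typename: "Message", id: "1", text: "hello", user: null }]
  })
}

const testStore = RootStore.create(undefined, { gqlHttpClient: fakeHttpClient as any })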
Discuss this project on spectrum
Author: Mobxjs
Source Code: https://github.com/mobxjs/mst-gql
License: MIT license
1659620707
Build Typed GraphQL Queries in TypeScript. A better TypeScript + GraphQL experience.
Install
npm install --save typed-graphqlify
Or if you use Yarn:
yarn add typed-graphqlify
Motivation
We all know that GraphQL is so great and solves many problems that we have with REST APIs, like overfetching and underfetching. But developing a GraphQL client in TypeScript is sometimes a bit of a pain. Why? Let's take a look at the example we usually have to make.
When we use a GraphQL library such as Apollo, we have to define a query and its interface like this:
interface GetUserQueryData {
getUser: {
id: number
name: string
bankAccount: {
id: number
branch?: string
}
}
}
const query = graphql(gql`
query getUser {
user {
id
name
bankAccount {
id
branch
}
}
}
`)
apolloClient.query<GetUserQueryData>(query).then(data => ...)
This is so painful.
The biggest problem is the redundancy in our codebase, which makes it difficult to keep things in sync. To add a new field to our entity, we have to update both the GraphQL query and the TypeScript interface. And type checking does not work if we get something wrong.
typed-graphqlify comes in to address these issues, based on experience from over a dozen months of developing with GraphQL APIs in TypeScript. The main idea is to have only one source of truth by defining the schema using a GraphQL-like object and a bit of helper class. Additional features, such as graphql-tag or Fragment support, can be provided by other tools like Apollo.
How to use
Define GraphQL-like JS Object:
import { query, types, alias } from 'typed-graphqlify'
const getUserQuery = query('GetUser', {
user: {
id: types.number,
name: types.string,
bankAccount: {
id: types.number,
branch: types.optional.string,
},
},
})
Note that we use our types
helper to define types in the result.
The getUserQuery
has toString()
method which converts the JS object into GraphQL string:
console.log(getUserQuery.toString())
// =>
// query getUser {
// user {
// id
// name
// bankAccount {
// id
// branch
// }
// }
// }
Finally, execute the GraphQL query and type its result:
import { executeGraphql } from 'some-graphql-request-library'
// We would like to type this!
const data: typeof getUserQuery.data = await executeGraphql(getUserQuery.toString())
// As we cast `data` to `typeof getUserQuery.data`,
// Now, `data` type looks like this:
// interface result {
// user: {
// id: number
// name: string
// bankAccount: {
// id: number
// branch?: string
// }
// }
// }
Features
Currently typed-graphqlify can convert the GraphQL features shown in the examples below. Its types helpers map to the following TypeScript types:
- types.number → number
- types.string → string
- types.boolean → boolean
- types.optional.number → number | undefined
Examples
query getUser {
user {
id
name
isActive
}
}
import { query, types } from 'typed-graphqlify'
query('getUser', {
user: {
id: types.number,
name: types.string,
isActive: types.boolean,
},
})
Or without a query name:
query {
user {
id
name
isActive
}
}
import { query, types } from 'typed-graphqlify'
query({
user: {
id: types.number,
name: types.string,
isActive: types.boolean,
},
})
Use mutation
. Note that you should use alias
to remove arguments.
Note: When Template Literal Type
is supported officially, we don't have to write alias
. See https://github.com/acro5piano/typed-graphqlify/issues/158
mutation updateUserMutation($input: UserInput!) {
updateUser: updateUser(input: $input) {
id
name
}
}
import { mutation, alias } from 'typed-graphqlify'
mutation('updateUserMutation($input: UserInput!)', {
[alias('updateUser', 'updateUser(input: $input)')]: {
id: types.number,
name: types.string,
},
})
Or, you can also use the params helper, which is useful for inline arguments.
helper which is useful for inline arguments.
import { mutation, params, rawString } from 'typed-graphqlify'
mutation('updateUserMutation', {
updateUser: params(
{
input: {
name: rawString('Ben'),
slug: rawString('/ben'),
},
},
{
id: types.number,
name: types.string,
},
),
})
Write nested objects just like GraphQL.
query getUser {
user {
id
name
parent {
id
name
grandParent {
id
name
children {
id
name
}
}
}
}
}
import { query, types } from 'typed-graphqlify'
query('getUser', {
user: {
id: types.number,
name: types.string,
parent: {
id: types.number,
name: types.string,
grandParent: {
id: types.number,
name: types.string,
children: {
id: types.number,
name: types.string,
},
},
},
},
})
Just wrap the selection in an array in your query object. This does not change the generated query, but TypeScript will be aware that the field is an array.
query getUsers {
users: users(status: "active") {
id
name
}
}
import { alias, query, types } from 'typed-graphqlify'
query('getUsers', {
[alias('users', 'users(status: "active")')]: [{
id: types.number,
name: types.string,
}],
})
Add types.optional
or optional
helper method to define optional field.
import { optional, query, types } from 'typed-graphqlify'
query('getUser', {
user: {
id: types.number,
name: types.optional.string, // <-- user.name is `string | undefined`
bankAccount: optional({ // <-- user.bankAccount is `{ id: number } | undefined`
id: types.number,
}),
},
})
Use types.constant
method to define constant field.
query getUser {
user {
id
name
__typename # <-- Always `User`
}
}
import { query, types } from 'typed-graphqlify'
query('getUser', {
user: {
id: types.number,
name: types.string,
__typename: types.constant('User'),
},
})
Use types.oneOf
method to define Enum field. It accepts an instance of Array
, Object
and Enum
.
query getUser {
user {
id
name
type # <-- `STUDENT` or `TEACHER`
}
}
import { query, types } from 'typed-graphqlify'
const userType = ['STUDENT', 'TEACHER'] as const
query('getUser', {
user: {
id: types.number,
name: types.string,
type: types.oneOf(userType),
},
})
import { query, types } from 'typed-graphqlify'
const userType = {
STUDENT: 'STUDENT',
TEACHER: 'TEACHER',
}
query('getUser', {
user: {
id: types.number,
name: types.string,
type: types.oneOf(userType),
},
})
You can also use enum
:
Deprecated: Don't use enum, use array or plain object to define enum if possible. typed-graphqlify can't guarantee inferred type is correct.
import { query, types } from 'typed-graphqlify'
enum UserType {
'STUDENT',
'TEACHER',
}
query('getUser', {
user: {
id: types.number,
name: types.string,
type: types.oneOf(UserType),
},
})
Use params
to define field with arguments.
query getUser {
user {
id
createdAt(format: "d.m.Y")
}
}
import { query, types, params, rawString } from 'typed-graphqlify'
query('getUser', {
user: {
id: types.number,
createdAt: params({ format: rawString('d.m.Y') }, types.string),
},
})
Add other queries at the same level of the other query.
query getFatherAndMother {
father {
id
name
}
mother {
id
name
}
}
import { query, types } from 'typed-graphqlify'
query('getFatherAndMother', {
father: {
id: types.number,
name: types.string,
},
mother: {
id: types.number,
name: types.string,
},
})
Query alias is implemented via a dynamic property.
query getMaleUser {
maleUser: user {
id
name
}
}
import { alias, query, types } from 'typed-graphqlify'
query('getMaleUser', {
[alias('maleUser', 'user')]: {
id: types.number,
name: types.string,
},
})
Use the fragment
helper to create GraphQL Fragment, and spread the result into places the fragment is used.
query {
user: user(id: 1) {
...userFragment
}
maleUsers: users(sex: MALE) {
...userFragment
}
}
fragment userFragment on User {
id
name
bankAccount {
...bankAccountFragment
}
}
fragment bankAccountFragment on BankAccount {
id
branch
}
import { alias, fragment, query, types } from 'typed-graphqlify'
const bankAccountFragment = fragment('bankAccountFragment', 'BankAccount', {
id: types.number,
branch: types.string,
})
const userFragment = fragment('userFragment', 'User', {
id: types.number,
name: types.string,
bankAccount: {
...bankAccountFragment,
},
})
query({
  [alias('user', 'user(id: 1)')]: {
    ...userFragment,
  },
  [alias('maleUsers', 'users(sex: MALE)')]: {
    ...userFragment,
  },
})
Use on
helper to write inline fragments.
query getHeroForEpisode {
hero {
id
... on Droid {
primaryFunction
}
... on Human {
height
}
}
}
import { on, query, types } from 'typed-graphqlify'
query('getHeroForEpisode', {
hero: {
id: types.number,
...on('Droid', {
primaryFunction: types.string,
}),
...on('Human', {
height: types.number,
}),
},
})
If you are using a discriminated union pattern, then you can use the onUnion
helper, which will automatically generate the union type for you:
query getHeroForEpisode {
hero {
id
... on Droid {
kind
primaryFunction
}
... on Human {
kind
height
}
}
}
import { onUnion, query, types } from 'typed-graphqlify'
query('getHeroForEpisode', {
hero: {
id: types.number,
...onUnion({
Droid: {
kind: types.constant('Droid'),
primaryFunction: types.string,
},
Human: {
kind: types.constant('Human'),
height: types.number,
},
}),
},
})
This function will return a type of A | B
, meaning that you can use the following logic to differentiate between the 2 types:
const droidOrHuman = queryResult.hero
if (droidOrHuman.kind === 'Droid') {
const droid = droidOrHuman
// ... handle droid
} else if (droidOrHuman.kind === 'Human') {
const human = droidOrHuman
// ... handle human
}
Directives are not supported, but you can use alias to render them.
to render it.
query {
myState: myState @client
}
import { alias, query } from 'typed-graphqlify'
query({
[alias('myState', 'myState @client')]: types.string,
})
See more examples at src/__tests__/index.test.ts
Usage with React Native
This library uses Symbol
and Map
, meaning that if you are targeting ES5 and lower, you will need to polyfill both of them.
So, you may need to import babel-polyfill
in App.tsx
.
import 'babel-polyfill'
import * as React from 'react'
import { View, Text } from 'react-native'
import { query, types } from 'typed-graphqlify'
const queryString = query({
getUser: {
user: {
id: types.number,
},
},
})
export class App extends React.Component<{}> {
render() {
return (
<View>
<Text>{queryString}</Text>
</View>
)
}
}
See: https://github.com/facebook/react-native/issues/18932
Why not use apollo client:codegen
?
There are some GraphQL -> TypeScript conversion tools. The most famous one is Apollo codegen:
https://github.com/apollographql/apollo-tooling#apollo-clientcodegen-output
In this section, we will go over why typed-graphqlify
is a good alternative.
Disclaimer: I am not a heavy user of Apollo codegen, so the following points could be wrong. And I totally don't mean to disrespect Apollo codegen.
Apollo codegen is a great tool. In addition to generating query interfaces, it does a lot of tasks including downloading schemas, schema validation, fragment spreading, etc.
However, great usability is the tradeoff of complexity.
There are some issues with generating interfaces with Apollo codegen.
I (and maybe everyone) don't know the exact reasons, but Apollo's codebase is too large to find out what the problem is.
On the other hand, typed-graphqlify
is as simple as possible by design, and the logic is quite easy. If some issues happen, we can fix them easily.
Currently Apollo codegen cannot handle multiple schemas.
Although I know this is a kind of edge case: if we have the same type name in different schemas, which one is used?
Some graphql frameworks, such as laravel-graphql, cannot print the schema as far as I know. I agree that we should avoid using such frameworks, but there must be situations where we cannot get a graphql schema for some reason.
It is useful to write GraphQL programmatically, although that is an edge case.
Imagine AWS management console:
If you build something like that with GraphQL, you have to build GraphQL dynamically and programmatically.
typed-graphqlify works for such cases without losing type information.
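A rough sketch of that idea, building the selection at runtime from a list of requested fields (the helper itself is hypothetical; query, types and toString() are the documented API):
import { query, types } from 'typed-graphqlify'

function buildUserQuery(fields: string[]) {
  // decide at runtime which user fields to request
  const selection: Record<string, unknown> = { id: types.number }
  if (fields.includes('name')) selection.name = types.string
  if (fields.includes('isActive')) selection.isActive = types.boolean
  return query('getUser', { user: selection })
}

// e.g. buildUserQuery(['name']).toString() produces roughly:
// query getUser { user { id name } }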
Contributing
To get started with a development installation of typed-graphqlify, follow the instructions at our Contribution Guide.
Thanks
Inspired by
Author: Acro5piano
Source Code: https://github.com/acro5piano/typed-graphqlify
License: MIT license