next-boost adds a cache layer to your SSR (Server-Side Rendering) applications. It was originally built for Next.js and should work with any node.js http.Server based application.
next-boost achieves great performance by rendering webpages on worker_threads while serving the cached pages on the main thread.
If you are familiar with Next.js, next-boost can be considered an implementation of Incremental Static Regeneration that works with getServerSideProps. It is not meant to be used with getStaticProps, for which Next.js already does the caching for you.
$ yarn add @next-boost/next-boost
$ yarn add @next-boost/redis-cache # using load-balancer and cluster
$ yarn add @next-boost/hybrid-disk-cache # simple site with disk cache
Features:
- Drop-in replacement for next start, rendering on worker_threads for SSR.
- See next-boost.redis.js for a sample config using @next-boost/redis-cache.
- See next-boost.hdc.js for a sample config using @next-boost/hybrid-disk-cache.

Using the next-boost cli with Next.js

After installing the package, just change the start script from next start to next-boost. All of next start's command line arguments, like -p for specifying the port, are compatible.
"scripts": {
...
"start": "next-boost", // previously `next start`
...
},
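For example, to listen on a different port you can pass the same flag you would give to next start (the port number here is just an illustration):

"scripts": {
  "start": "next-boost -p 8080"
},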
There's an example under examples/nodejs
, which works with a plain http.Server
.
To use it with express.js
and next.js
, please check examples/with-express
.
By using worker_threads, the CPU-heavy SSR rendering will not block the main process from serving the cache.
Here is a comparison using ApacheBench on a blog post fetched from a database. The HTML is prerendered and the db operation takes around 10~20ms. The page takes around 200ms for Next.js to render.
$ /usr/local/bin/ab -n 200 -c 8 http://127.0.0.1:3000/blog/posts/2020/3/postname
Not a scientific benchmark, but the improvements are visibly huge.
with next start
(data fetched with getServerSideProps
):
Document Length: 76424 bytes
Concurrency Level: 8
Time taken for tests: 41.855 seconds
Complete requests: 200
Failed requests: 0
Total transferred: 15325600 bytes
HTML transferred: 15284800 bytes
Requests per second: 4.78 [#/sec] (mean)
Time per request: 1674.185 [ms] (mean)
Time per request: 209.273 [ms] (mean, across all concurrent requests)
Transfer rate: 357.58 [Kbytes/sec] received
with the drop-in next-boost
cli:
Document Length: 78557 bytes
Concurrency Level: 8
Time taken for tests: 0.149 seconds
Complete requests: 200
Failed requests: 0
Total transferred: 15747600 bytes
HTML transferred: 15711400 bytes
Requests per second: 1340.48 [#/sec] (mean)
Time per request: 5.968 [ms] (mean)
Time per request: 0.746 [ms] (mean, across all concurrent requests)
Transfer rate: 103073.16 [Kbytes/sec] received
It even outperforms next.js's statically generated pages (getStaticProps), handling 2~2.5x the requests per second in my environment.
next-boost
implements a server-side cache in the manner of stale-while-revalidate. When an expired (stale
) page is accessed, the cache will be served and at the same time, a background process will fetch the latest version (revalidate
) of that page and save it to the cache.
The following config will cache URIs matching ^/blog.*. Only pages matching rules will be handled by next-boost; there are no exclude rules.
module.exports = {
rules: [{ regex: '^/blog.*', ttl: 300 }],
}
There are 2 parameters to control the behavior of the cache:
- ttl (time-to-live): after ttl seconds, the cache will be revalidated. A cached page's ttl is updated when the page is revalidated.
- tbd (time-before-deletion): when a page is not hit again within ttl + tbd seconds, it will be completely removed from the cache.

The config above only caches pages whose URL starts with /blog.
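Both parameters can also be set in the config's cache section (see the HandlerConfig interface further below). A minimal sketch, with illustrative values:

// in .next-boost.js
module.exports = {
  cache: { ttl: 60, tbd: 3600 }, // defaults for pages not matched by a more specific rule
  rules: [{ regex: '^/blog.*', ttl: 300 }], // /blog pages are revalidated every 5 minutes
}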
By sending a GET request with the header x-next-boost:update to the URL, the cache will be revalidated. If the page doesn't exist anymore, the cache will be deleted.
$ curl -H x-next-boost:update https://the_server_name.com/path_a
If you want to delete multiple pages at once, you can run SQL on the cache directly:
sqlite3 /cache_path/cache.db "update cache set ttl=0 where key like '%/url/a%';"
This will force all URLs containing /url/a to be revalidated the next time they are accessed.
Deleting cache_path
will remove all the caches.
By default, pages with different URLs will be cached separately. But in some cases you would like /path_a?utm_source=twitter to be served with the same contents as /path_a. paramFilter is for filtering the query parameters.
// in .next-boost.js
{
...
paramFilter: (p) => p !== 'utm_source'
}
By default, the URL will be used as the key for cached pages. If you want to serve pages from different domains or by different user-agents, you can use this function to customize the cache key.
Note: if the function does not return a string, your server will crash.
// in .next-boost.js
{
...
cacheKey: (req) => (req.headers.host || '') + ':' + req.url
}
Alternatively, you can provide a function instead of an array inside your config.
// in .next-boost.js
{
...
rules: (req) => {
if (req.url.startsWith('/blog')) {
return 300
}
}
}
The function should return a valid ttl for the request. If the function returns 0 or a falsy value, the request will not be cached.
The power of this method is that you can decide more dynamically whether a request is cached.
For example, you can automatically skip caching for all requests from authenticated users based on a header:
// in .next-boost.js
{
...
rules: (req) => {
if (req.headers.authorization) {
return false
}
return 10 // cache all other requests for 10 seconds
}
}
You can also express more complex rules more easily than through regex. For example, you may wish to use a different ttl for each pagination page.
// in .next-boost.js
{
...
rules: (req) => {
const [, p1] = req.url.split('?', 2)
const params = new URLSearchParams(p1)
return {
1: 5000,
2: 4000,
3: 3000,
4: 2000
}[params.get('page')] || 1000
}
}
Where you would otherwise need to write a complex regex rule, or potentially several rules, this is easy to express with JS logic.
And if you prefer writing regex but still wish to leverage JS logic, you can always match a regex inside a rules handler.
If available, .next-boost.js at the project root will be used. If you use next-boost programmatically, the filename can be changed in the options you pass to CachedHandler.
Tip: if you are using the next-boost cli with Next.js, you may want to use the config file.
And here's an example .next-boost.sample.js
in the repo.
interface HandlerConfig {
filename?: string
quiet?: boolean
cache?: {
ttl?: number
tbd?: number
path?: string
}
rules?: Array<URLCacheRule> | URLCacheRuleResolver
paramFilter?: ParamFilter
cacheKey?: CacheKeyBuilder
}
interface URLCacheRule {
regex: string
ttl: number
}
type URLCacheRuleResolver = (req: IncomingMessage) => number
type ParamFilter = (param: string) => boolean
type CacheKeyBuilder = (req: IncomingMessage) => string
Logging is enabled by default. If you use next-boost
programmatically, you can disable logs by passing the quiet
boolean flag as an option to CachedHandler
.
...
const cached = await CachedHandler(args, { quiet: true });
...
There's also a --quiet
flag if you are using the command line.
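For example, combined with the port flag mentioned earlier (a sketch):

$ next-boost -p 3000 --quiet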
Limitations:
- In a deployment with multiple backend servers, the benefit of next-boost is limited: until the URL is hit on every backend server, it can still miss the cache. Use a reverse proxy with cache support (nginx, varnish, etc.) for that.
- GET and HEAD requests only.
- worker_threads is used, which is a node.js 12+ feature.

next-boost works as an in-place replacement for next start by using Next.js's custom server feature.
On the linked page above, you can see the following notice:
Before deciding to use a custom server please keep in mind that it should only be used when the integrated router of Next.js can't meet your app requirements. A custom server will remove important performance optimizations, like serverless functions and Automatic Static Optimization.
next-boost is meant to be used on cloud VPSs or containers, so serverless functions are not an issue here. As for Automatic Static Optimization, because we are not doing any app.render here, it still works, as perfectly as always.
Here's the article about when not to use SQLite. For next-boost's main purpose, super-fast SSR on low-cost VPSs, it is, as far as I know, the best choice.
Author: Next-boost
Source Code: https://github.com/next-boost/next-boost
License: MIT
Production-ready, lightweight, fully customizable React carousel component that supports multiple items and SSR (Server-Side Rendering).
We are on a very exciting journey towards version 3.0 of this component, which will be rewritten entirely in hooks/context. That means a smaller bundle size, performance improvements, easier customization of the component, and many more benefits.
It would mean so much if you could provide help towards the further development of this project as we do this open source work in our own free time especially during this covid-19 crisis.
If you are using this component seriously, please donate or talk to your manager as this project increases your income too. It will help us make releases, fix bugs, fulfill new feature requests faster and better.
Become a backer/sponsor to get your logo/image on our README on Github with a link to your site.
Big thanks to BrowserStack for letting the maintainers use their service to debug browser issues.
Bundle size: 2.5 kB.
Documentation is here.
Demo for the SSR https://react-multi-carousel.now.sh/
Try to disable JavaScript to test if it renders on the server-side.
Code for the SSR demo is at github.
Code for the documentation is at github.
$ npm install react-multi-carousel --save
import Carousel from 'react-multi-carousel';
import 'react-multi-carousel/lib/styles.css';
Code for SSR is at github.
Here is a lighter alternative library for detecting the user's device type.
You can choose to only bundle it on the server-side.
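However you detect it, the device type just needs to end up as a prop on your page so it can be forwarded to the carousel together with ssr. A minimal sketch of that wiring in a Next.js page follows; the simple user-agent sniffing is an assumption for illustration, not part of this library:

import Carousel from "react-multi-carousel";
import "react-multi-carousel/lib/styles.css";

const responsive = {
  desktop: { breakpoint: { max: 3000, min: 1024 }, items: 3 },
  mobile: { breakpoint: { max: 1024, min: 0 }, items: 1 },
};

// hypothetical helper: guess the device type from the user-agent string
const detectDeviceType = (userAgent = "") =>
  /tablet|ipad/i.test(userAgent)
    ? "tablet"
    : /mobile|iphone|android/i.test(userAgent)
    ? "mobile"
    : "desktop";

const Page = ({ deviceType }) => (
  <Carousel ssr deviceType={deviceType} responsive={responsive}>
    <div>Item 1</div>
    <div>Item 2</div>
  </Carousel>
);

// runs on the server for the first request, on the client for navigations
Page.getInitialProps = ({ req }) => ({
  deviceType: detectDeviceType(req ? req.headers["user-agent"] : navigator.userAgent),
});

export default Page;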
import Carousel from "react-multi-carousel";
import "react-multi-carousel/lib/styles.css";
const responsive = {
superLargeDesktop: {
// the naming can be any, depends on you.
breakpoint: { max: 4000, min: 3000 },
items: 5
},
desktop: {
breakpoint: { max: 3000, min: 1024 },
items: 3
},
tablet: {
breakpoint: { max: 1024, min: 464 },
items: 2
},
mobile: {
breakpoint: { max: 464, min: 0 },
items: 1
}
};
<Carousel responsive={responsive}>
<div>Item 1</div>
<div>Item 2</div>
<div>Item 3</div>
<div>Item 4</div>
</Carousel>;
import Carousel from "react-multi-carousel";
import "react-multi-carousel/lib/styles.css";
const responsive = {
desktop: {
breakpoint: { max: 3000, min: 1024 },
items: 3,
slidesToSlide: 3 // optional, default to 1.
},
tablet: {
breakpoint: { max: 1024, min: 464 },
items: 2,
slidesToSlide: 2 // optional, default to 1.
},
mobile: {
breakpoint: { max: 464, min: 0 },
items: 1,
slidesToSlide: 1 // optional, default to 1.
}
};
<Carousel
swipeable={false}
draggable={false}
showDots={true}
responsive={responsive}
ssr={true} // means to render carousel on server-side.
infinite={true}
autoPlay={this.props.deviceType !== "mobile" ? true : false}
autoPlaySpeed={1000}
keyBoardControl={true}
customTransition="all .5"
transitionDuration={500}
containerClass="carousel-container"
removeArrowOnDeviceType={["tablet", "mobile"]}
deviceType={this.props.deviceType}
dotListClass="custom-dot-list-style"
itemClass="carousel-item-padding-40-px"
>
<div>Item 1</div>
<div>Item 2</div>
<div>Item 3</div>
<div>Item 4</div>
</Carousel>;
You can pass your own custom arrows to make it the way you want, the same for the position. For example, add media query for the arrows to go under when on smaller screens.
Your custom arrows will receive a list of props/state passed back by the carousel, such as the currentSlide and whether dragging or swiping is in progress.
const CustomRightArrow = ({ onClick, ...rest }) => {
const {
onMove,
carouselState: { currentSlide, deviceType }
} = rest;
// onMove means if dragging or swiping in progress.
return <button onClick={() => onClick()} />;
};
<Carousel customRightArrow={<CustomRightArrow />} />;
This is very useful if you don't want the dots, or arrows and you want to fully customize the control functionality and styling yourself.
const ButtonGroup = ({ next, previous, goToSlide, ...rest }) => {
const { carouselState: { currentSlide } } = rest;
return (
<div className="carousel-button-group"> {/* remember to give it position:absolute */}
<ButtonOne className={currentSlide === 0 ? 'disable' : ''} onClick={() => previous()} />
<ButtonTwo onClick={() => next()} />
<ButtonThree onClick={() => goToSlide(currentSlide + 1)}> Go to any slide </ButtonThree>
</div>
);
};
<Carousel arrows={false} customButtonGroup={<ButtonGroup />}>
<ItemOne />
<ItemTwo />
</Carousel>
Passing this prop renders the button group outside of the Carousel container. This is done using React.Fragment.
<div className='my-own-custom-container'>
<Carousel arrows={false} renderButtonGroupOutside={true} customButtonGroup={<ButtonGroup />}>
<ItemOne />
<ItemTwo />
</Carousel>
</div>
You can pass your own custom dots to replace the default one.
Custom dots can also be a copy or an image of your carousel item. See the example in this one.
The code for this example.
Your custom dots will receive a list of props/state passed back by the carousel, such as the currentSlide and whether dragging or swiping is in progress.
const CustomDot = ({ onClick, ...rest }) => {
const {
onMove,
index,
active,
carouselState: { currentSlide, deviceType }
} = rest;
const carouselItems = [CarouselItem1, CarouselItem2, CarouselItem3];
// onMove means if dragging or swiping in progress.
// active is provided by this lib for checking if the item is active or not.
return (
<button
className={active ? "active" : "inactive"}
onClick={() => onClick()}
>
{React.Children.toArray(carouselItems)[index]}
</button>
);
};
<Carousel showDots customDot={<CustomDot />}>
{carouselItems}
</Carousel>;
Passing this prop renders the dots outside of the Carousel container. This is done using React.Fragment.
<div className='my-own-custom-container'>
<Carousel arrows={false} showDots={true} renderDotsOutside={true}>
<ItemOne />
<ItemTwo />
</Carousel>
</div>
Shows the next items partially. This is very useful if you want to indicate to users that this carousel component is swipeable and has more items behind it.
This is different from the "centerMode" prop, as it only shows the next items. For the centerMode, it shows both.
const responsive = {
desktop: {
breakpoint: { max: 3000, min: 1024 },
items: 3,
partialVisibilityGutter: 40 // this is needed to tell the amount of px that should be visible.
},
tablet: {
breakpoint: { max: 1024, min: 464 },
items: 2,
partialVisibilityGutter: 30 // this is needed to tell the amount of px that should be visible.
},
mobile: {
breakpoint: { max: 464, min: 0 },
items: 1,
partialVisibilityGutter: 30 // this is needed to tell the amount of px that should be visible.
}
}
<Carousel partialVisible={true} responsive={responsive}>
<ItemOne />
<ItemTwo />
</Carousel>
Shows the next items and previous items partially.
<Carousel centerMode={true} />
This is a callback function that is invoked after every slide change.
<Carousel
afterChange={(previousSlide, { currentSlide, onMove }) => {
doSpecialThing();
}}
/>
This is a callback function that is invoked before every slide change.
<Carousel
beforeChange={(nextSlide, { currentSlide, onMove }) => {
doSpecialThing();
}}
/>
These callbacks are very useful in cases like the following:
<Carousel
beforeChange={() => this.setState({ isMoving: true })}
afterChange={() => this.setState({ isMoving: false })}
>
<a
onClick={e => {
if (this.state.isMoving) {
e.preventDefault();
}
}}
href="https://w3js.com"
>
Click me
</a>
</Carousel>
<Carousel beforeChange={nextSlide => this.setState({ nextSlide: nextSlide })}>
<div>Initial slide</div>
<div
onClick={() => {
if (this.state.nextSlide === 1) {
doVerySpecialThing();
}
}}
>
Second slide
</div>
</Carousel>
When calling the goToSlide function on a Carousel, the callbacks will run by default. You can skip all or individual callbacks by passing a second parameter to goToSlide.
this.Carousel.goToSlide(1, true); // Skips both beforeChange and afterChange
this.Carousel.goToSlide(1, { skipBeforeChange: true }); // Skips only beforeChange
this.Carousel.goToSlide(1, { skipAfterChange: true }); // Skips only afterChange
Go to slide on click and make the slide a current slide.
<Carousel focusOnSelect={true} />
<Carousel ref={(el) => (this.Carousel = el)} arrows={false} responsive={responsive}>
<ItemOne />
<ItemTwo />
</Carousel>
<button onClick={() => {
const nextSlide = this.Carousel.state.currentSlide + 1;
// this.Carousel.next()
// this.Carousel.goToSlide(nextSlide)
}}>Click me</button>
This is very useful when you are fully customizing the control functionality by yourself like this one
For example, if you give your carousel items 20px of left and right padding and you have 5 items in total, you might want to do the following:
<Carousel ref={el => (this.Carousel = el)} additionalTransfrom={-20 * 5} /> // it needs to be a negative number
Name | Type | Default | Description |
---|---|---|---|
responsive | object | {} | Numbers of slides to show at each breakpoint |
deviceType | string | '' | Only pass this when using server-side rendering; what to pass can be found in the example folder |
ssr | boolean | false | Use in conjunction with responsive and deviceType prop |
slidesToSlide | Number | 1 | How many slides to slide. |
draggable | boolean | true | Optionally disable/enable dragging on desktop |
swipeable | boolean | true | Optionally disable/enable swiping on mobile |
arrows | boolean | true | Hide/Show the default arrows |
renderArrowsWhenDisabled | boolean | false | Allow for the arrows to have a disabled attribute instead of not showing them |
removeArrowOnDeviceType | string or array | '' | Hide the default arrows at different breakpoints; should be used with the responsive prop. Value can be mobile or ['mobile', 'tablet'] (a string or an array) |
customLeftArrow | jsx | null | Replace the default arrow with your own |
customRightArrow | jsx | null | Replace the default arrow with your own |
customDot | jsx | null | Replace the default dots with your own |
customButtonGroup | jsx | null | Fully customize your own control functionality if you don't want arrows or dots |
infinite | boolean | false | Enables infinite scrolling in both directions. Carousel items are cloned in the DOM to achieve this. |
minimumTouchDrag | number | 50 | The amount of distance to drag / swipe in order to move to the next slide. |
afterChange | function | null | A callback invoked after every slide change. |
beforeChange | function | null | A callback invoked before every slide change. |
sliderClass | string | 'react-multi-carousel-track' | CSS class for inner slider div, use this to style your own track list. |
itemClass | string | '' | CSS class for carousel item, use this to style your own Carousel item. For example add padding-left and padding-right |
containerClass | string | 'react-multi-carousel-list' | Use this to style the whole container. For example add padding to allow the "dots" or "arrows" to go to other places without being overflown. |
dotListClass | string | 'react-multi-carousel-dot-list' | Use this to style the dot list. |
keyBoardControl | boolean | true | Use keyboard to navigate to next/previous slide |
autoPlay | boolean | false | Auto play |
autoPlaySpeed | number | 3000 | The unit is ms |
showDots | boolean | false | Show the default dot list |
renderDotsOutside | boolean | false | Show dots outside of the container |
partialVisible | boolean | false | Show the next items partially; requires partialVisibilityGutter to be set in the responsive prop. |
customTransition | string | transform 300ms ease-in-out | Configure your own animation when sliding |
transitionDuration | number | 300 | The unit is ms; if you are using customTransition, make sure to put the duration here as this is needed for the resizing to work. |
focusOnSelect | boolean | false | Go to slide on click and make the slide a current slide. |
centerMode | boolean | false | Shows the next items and previous items partially. |
additionalTransfrom | number | 0 | Additional transform applied on top of the current one. |
shouldResetAutoplay | boolean | true | resets autoplay when clicking next, previous button and the dots |
rewind | boolean | false | if infinite is not enabled and autoPlay explicitly is, this option rewinds the carousel when the end is reached (Lightweight infinite mode alternative without cloning). |
rewindWithAnimation | boolean | false | when rewinding the carousel back to the beginning, this decides if the rewind process should be instant or with transition. |
rtl | boolean | false | Sets the carousel direction to be right to left |
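As a quick illustration of a few of the props above, here is a sketch of a lightweight "rewind" setup (the alternative to the cloning-based infinite mode), reusing the responsive object from the earlier examples:

// rewind only applies when infinite is not enabled and autoPlay is explicitly on;
// rewindWithAnimation makes the jump back to the first slide use a transition
<Carousel
  responsive={responsive}
  infinite={false}
  autoPlay={true}
  autoPlaySpeed={3000}
  rewind={true}
  rewindWithAnimation={true}
>
  <div>Item 1</div>
  <div>Item 2</div>
</Carousel>;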
Please read https://github.com/YIZHUANG/react-multi-carousel/blob/master/contributing.md
Submit an issue for feature request or submit a pr.
Author: YIZHUANG
Source Code: https://github.com/YIZHUANG/react-multi-carousel
License: MIT license
* If you want to get very meta (not facebook), Ultra can be viewed as a tiny bridge to utilise native browser features 🌐 whilst using popular front-end libraries. 🧰
Here's a basic Ultra project to set you on your way.
deno run -A -r https://deno.land/x/ultra/create.ts
Ultra allows you to write web-apps which massively simplify your tool chain. You write ESM, we ship ESM. Where we are going, there is no "bundling" (it feels so 2018 just saying that word).
** Examples include (but are not limited to): react-query, twind, stitches, react-router, wouter, mdx. @__@
Does Ultra 'ship js'?
Yes, Ultra creates rich web applications which allow complex client-side routing, allow components to persist through route changes (media players, interactive elements, etc).
Our goal is to both write AND ship source code that works the same way on server/runtime and client. We view the browser as more than just a "target". Browser is life, and javascript is good.
Can I use TypeScript and/or JSX?
If you want.
What native browser features should we all be using more?
Unbundled ESM, service workers, universal import maps, cascading style sheets.
Ultra always has been (and always will be) powered by the following hot-takes:
We have the Discord. Come say 'sup.
The Ultra community welcomes outside contributions. See the Contributor Guidelines for details.
Here are some things we are interested in for the future of JS and/or Ultra:
Thank you for going on this journey with us.
Author: Exhibitionist-digital
Source Code: https://github.com/exhibitionist-digital/ultra
License: MIT license
This React Server-side optimization library is a configurable ReactJS extension for memoizing react component markup on the server. It also supports component templatization to further caching of rendered markup with more dynamic data. This server-side module intercepts React's instantiateReactComponent module by using a require()
hook and avoids forking React.
React is a best-of-breed UI component framework allowing us to build higher level components that can be shared and reused across pages and apps. React's Virtual DOM offers an excellent development experience, freeing us up from having to manage subtle DOM changes. Most importantly, React offers us a great out-of-the-box isomorphic/universal JavaScript solution. React's renderToString(..)
can fully render the HTML markup of a page to a string on the server. This is especially important for initial page load performance (particularly for mobile users with low bandwidth) and search engine indexing and ranking — both for SEO (search engine optimization) and SEM (search engine marketing).
However, it turns out that React’s server-side rendering can become a performance bottleneck for pages requiring many virtual DOM nodes. On large pages, ReactDOMServer.renderToString(..)
can monopolize the CPU, block node's event-loop and starve out incoming requests to the server. That's because for every page request, the entire page needs to be rendered, even fine-grained components which, given the same props, always return the same markup. CPU time is wasted in unnecessarily re-rendering the same components for every page request. Similar to pure functions in functional programming, a pure component will always return the same HTML markup given the same props, which means it should be possible to memoize (or cache) the rendered results to speed up rendering significantly after the first response.
We also wanted the ability to memoize any pure component, not just those that implement a certain interface. So we created a configurable component caching library that accepts a map of component name to a cacheKey generator function. Application owners can opt into this optimization by specifying the component's name and referencing a cacheKey generator function. The cacheKey generator function returns a string representing all inputs into the component's rendering that is then used to cache the rendered markup. Subsequent renderings of the component with the same name and the same props will hit the cache and return the cached result. This optimization lowers CPU time for each page request and allows more concurrent requests that are not blocked on synchronous renderToString
calls. The CPU profiles we took before and after applying these optimizations show a significant reduction in CPU utilization for each request.
To learn more about why we built this library, check out a talk from the Full Stack meetup from July 2016:
As well as another (lower quality) recording from the San Diego Web Performance meetup from August 2016:
Slide Deck: Hastening React SSR with component memoization and templatization
After peeling through the React codebase we discovered React’s mountComponent function. This is where the HTML markup is generated for a component. We knew that if we could intercept React's instantiateReactComponent module by using a require()
hook we could avoid the need to fork React and inject our optimization. We keep a Least-Recently-Used (LRU) cache that stores the markup of rendered components (replacing the data-reactid appropriately).
We also implemented an enhancement that will templatize the cached rendered markup to allow for more dynamic props. Dynamic props are replaced with template delimiters (i.e. ${ prop_name }) during the react component rendering cycle. The template is then compiled, cached, executed and the markup is handed back to React. For subsequent requests the component's render(..) call is short-circuited with an execution of the cached compiled template.
npm install --save react-ssr-optimization
You should load the module in the first script that's executed by Node, typically index.js
.
In index.js
you will have code that looks something like this:
"use strict";
var componentOptimization = require("react-ssr-optimization");
var keyGenerator = function (props) {
return props.id + ":" + props.name;
};
var componentOptimizationRef = componentOptimization({
components: {
'Component1': keyGenerator,
'Component2': {
cacheKeyGen: keyGenerator,
},
},
lruCacheSettings: {
max: 500, //The maximum size of the cache
}
});
With the cache reference you can also execute helpful operational functions like these:
//can be turned off and on dynamically by calling the enable function.
componentOptimizationRef.enable(false);
// Return an array of the cache entries
componentOptimizationRef.cacheDump();
// Return total length of objects in cache taking into account length options function.
componentOptimizationRef.cacheLength();
// Clear the cache entirely, throwing away all values.
componentOptimizationRef.cacheReset();
Even though pure components ‘should’ always render the same markup structure there are certain props that might be more dynamic than others. Take for example the following simplified product react component.
var React = require('react');
var ProductView = React.createClass({
render: function() {
return (
<div className="product">
<img src={this.props.product.image}/>
<div className="product-detail">
<p className="name">{this.props.product.name}</p>
<p className="description">{this.props.product.description}</p>
<p className="price">Price: ${this.props.selected.price}</p>
<button type="button" onClick={this.addToCart} disabled={this.props.inventory > 0 ? '' : 'disabled'}>
{this.props.inventory ? 'Add To Cart' : 'Sold Out'}
</button>
</div>
</div>
);
}
});
module.exports = ProductView;
This component takes props like the product image, name, description, and price. If we were to apply the component memoization described above, we'd need a cache large enough to hold all the products. Moreover, less frequently accessed products would likely have more cache misses. This is why we also added the component templatization feature. This feature requires classifying properties into two different groups:
These attributes are configured in the component caching library, but instead of providing a cacheKey generator function you’d pass in the templateAttrs and cacheAttrs instead. It looks something like this:
var componentOptimization = require("react-ssr-optimization");
componentOptimization({
components: {
"ProductView": {
templateAttrs: ["product.image", "product.name", "product.description", "product.price"],
cacheAttrs: ["product.inventory"]
},
"ProductCallToAction": {
templateAttrs: ["url"],
cacheAttrs: ["availabilityStatus", "isAValidOffer", "maxQuantity", "preorder", "preorderInfo.streetDateType", "puresoi", "variantTypes", "variantUnselectedExp"]
}
}
});
Notice that the template attributes for ProductView are all the dynamic props that would be different for each product. In this example, we also used product.inventory prop as a cache key attribute since the markup changes based on inventory logic to enable the add to cart button. Here is the same product component from above cached as a template.
<div className="product">
<img src=${product_image}/>
<div className="product-detail">
<p className="name">${product_name}</p>
<p className="description">${product_description}</p>
<p className="price">Price: ${selected_price}</p>
<button type="button" onClick={this.addToCart} disabled={this.props.inventory > 0 ? '' : 'disabled'}>
{this.props.inventory ? 'Add To Cart' : 'Sold Out'}
</button>
</div>
</div>
For the given component name, the cache key attributes are used to generate a cache key for the template. For subsequent requests the component’s render is short-circuited with a call to the compiled template.
Here is the set of options that can be passed to the react-ssr-optimization library:

- components: a required map of components that will be cached and the corresponding function to generate each cache key.
  - key: a required string name identifying the component. This can be either the name of the component when it extends React.Component or the displayName variable.
  - value: a required function/object which generates a string that will be used as the component's cache key. If an object, it can contain the following attributes:
    - cacheKeyGen: an optional function which generates a string that will be used as the component's cache key. If cacheKeyGen and cacheAttrs are not set, then only one element for the component will exist in the cache.
    - templateAttrs: an optional array of strings corresponding to attribute names/keys in props that need to be templatized. Each value can have deep paths, e.g. x.y.z.
    - cacheAttrs: an optional array of attributes to be used for generating a cache key. Can be used in place of cacheKeyGen.
- lruCacheSettings: by default, this library uses a Least Recently Used (LRU) cache to store the rendered markup of cached components. As the name suggests, an LRU cache throws out the data that was least recently used; as more components are put into the cache, other rendered components fall out of it. Configuring the LRU cache properly is essential for server optimization. Here are the LRU cache configurations you should consider setting:
  - max: an optional number indicating the maximum size of the cache, checked by applying the length function to all values in the cache. Default value is Infinity.
  - maxAge: an optional number indicating the maximum age in milliseconds. Default value is Infinity.
  - length: an optional function that is used to calculate the length of stored items. The default is function(){ return 1 }.
- cacheImpl: an optional config that allows the usage of a custom cache implementation. This takes precedence over the lruCacheSettings option.
- disabled: an optional config indicating that the component caching feature should be disabled after instantiation.
- eventCallback: an optional function that is executed for interesting events like cache misses and hits. The function should take an event object, function(e){...}. The event object will have the following properties:
  - type: the type of event, e.g. "cache".
  - event: the kind of event, e.g. "miss" for cache events.
  - cmpName: the component name that this event transpired on, e.g. the "Hello World" component.
  - loadTimeNS: the time spent loading/generating a value for a cache miss, in nanoseconds. This only returns a value when the collectLoadTimeStats option is enabled.
- collectLoadTimeStats: an optional config enabling the loadTimeNS stat to be calculated and returned in the eventCallback cache miss events.

It is important to note that there are several other independent projects that are endeavoring to solve the React server-side rendering bottleneck. Projects like react-dom-stream and react-server attempt to deal with the synchronous nature of ReactDOM.renderToString by rendering React pages asynchronously and in separate chunks. Streaming and chunking react rendering helps on the server by preventing synchronous render processing from starving out other concurrent requests. Streaming the initial HTML markup also means that browsers can start painting pages earlier (without having to wait for the entire response).
These approaches help improve user perceived performance since content can be painted sooner on the screen. But whether rendering is done synchronously or asynchronously, the total CPU time remains the same since the same amount of work still needs to be done. In contrast, component memoization and templatization reduces the total amount of CPU time for subsequent requests that re-render the same components again. These rendering optimizations can be used in conjunction with other performance enhancements like asynchronous rendering.
Author: Walmartlabs
Source Code: https://github.com/walmartlabs/react-ssr-optimization
License: Apache-2.0 license
React server-side rendering support for Fastify with Next.js framework. This library is for letting an existing Fastify server utilize NextJS, not for replacing NextJS' internal webserver with Fastify.
npm i @fastify/nextjs next react react-dom
Since Next.js needs some time to be ready on the first launch, you must declare your routes inside the after
callback, after you registered the plugin. The plugin will expose the next
API in Fastify that will handle the rendering for you.
const fastify = require('fastify')()
fastify
.register(require('@fastify/nextjs'))
.after(() => {
fastify.next('/hello')
})
fastify.listen(3000, err => {
if (err) throw err
console.log('Server listening on http://localhost:3000')
})
All your server rendered pages must be saved in the folder pages
, as you can see in the Next.js documentation.
// /pages/hello.js
export default () => <div>hello world</div>
If you need to pass custom options to next
just pass them to register as second parameter.
fastify.register(require('@fastify/nextjs'), { dev: true })
If you need to handle the render part yourself, just pass a callback to next
:
fastify.next('/hello', (app, req, reply) => {
// your code
// `app` is the Next instance
app.render(req.raw, reply.raw, '/hello', req.query, {})
})
If you need to render with Next.js from within a custom handler, use reply.nextRender
app.setErrorHandler((err, req, reply) => {
return reply.nextRender('/a')
})
If you need to render a Next.js error page, use reply.nextRenderError
app.setErrorHandler((err, req, reply) => {
return reply.status(err.statusCode || 500).nextRenderError(err)
})
If you need to handle HEAD routes, you can define the HTTP method:
fastify.next('/api/*', { method: 'GET' });
fastify.next('/api/*', { method: 'HEAD' });
By default the plugin handles the route ${basePath}/_next/* and forwards it to Next.js.
If you have custom preprocessing for _next/* requests, you can prevent this handling with the noServeAssets: true property in the plugin options:
fastify
.register(require('@fastify/nextjs'), {
noServeAssets: true
})
.after(() => {
fastify.next(`${process.env.BASE_PATH || ''}/_next/*`, (app, req, reply) => {
// your code
app.getRequestHandler()(req.raw, reply.raw).then(() => {
reply.sent = true
})
})
})
The plugin includes under-pressure, which can be configured by providing an underPressure
property to the plugin options.
Using under-pressure
allows implementing a circuit breaker that returns an error when the health metrics are not respected. Because React server side rendering is a blocking operation for the Node.js server, returning an error to the client allows signalling that the server is under too much load.
The available options are the same as those accepted by under-pressure
.
For example:
fastify.register(require('@fastify/nextjs'), {
underPressure: {
exposeStatusRoute: true
}
})
underPressure - bool|object
- false (default): under-pressure is not registered
- true: under-pressure is registered with default options
- object: under-pressure is registered with the provided options

If you want to share custom objects (for example other fastify plugin instances - e.g. @fastify/redis) across the server/client with each page request, you can use the onRequest
hook to add it to the request object. Here is an example on how to do it:
const Fastify = require('fastify')
const FastifyRedis = require('@fastify/redis')
const FastifyNextJS = require('@fastify/nextjs')
const fastify = Fastify()
fastify.register(FastifyRedis, { host: '127.0.0.1' })
fastify.register(FastifyNextJS)
fastify.register(function(instance) {
// for performance reasons we do not want it to run on every request
// only the nextjs one should run
instance.addHook('onRequest', function(request, reply, done) {
// define a custom property on the request
request.raw.customProperty = { hello: "world" }
// OR make the instance of @fastify/redis available in the request
request.raw.redisInstance = instance.redis
done()
})
instance.next('/', function(app, request, reply) {
// your custom property containing the object will be available here
// request.raw.customProperty
// OR the redis instance
// request.raw.redisInstance
app.render(request.raw, reply.raw, '/hello', request.query, {})
})
}, { prefix: '/hello' })
In the example above we made the customProperty
and redisInstance
accessible in every request that is made to the server. On the client side it can be accessed like in this example:
const CustomPropPage = ({ cp, ri }) => <div>custom property value: {cp} | redis instance: {ri}</div>;
export default CustomPropPage;
export const getServerSideProps = async function (ctx) {
return {
props: {
cp: ctx.req.customProperty,
ri: ctx.req.redisInstance,
}
};
};
The default timeout for plugins in Fastify is 10000ms, which can be a problem for huge Next.js Projects where the initial build time is higher than that. Usually, you will get an error like this:
Error: ERR_AVVIO_PLUGIN_TIMEOUT: plugin did not start in time: /app/node_modules/@fastify/nextjs/index.js. You may have forgotten to call 'done' function or to resolve a Promise
The workaround or fix is to increase the plugin timeout:
const isDev = process.env.NODE_ENV !== 'production';
const fastify = Fastify({ pluginTimeout: isDev ? 120_000 : undefined });
CI currently runs npm@6 so when upgrading packages, please use this version.
This project is kindly sponsored by:
Author: fastify
Source Code: https://github.com/fastify/fastify-nextjs
License: MIT license
Flareact is an edge-rendered React framework powered by Cloudflare Workers.
It's inspired by Next.js.
That means it supports nice API patterns like getEdgeProps.
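A sketch of what a page using getEdgeProps could look like (it mirrors the Next.js-style data-fetching pattern Flareact is inspired by; treat the exact return shape and the API URL as assumptions):

// pages/index.js
export async function getEdgeProps() {
  // runs at the edge for each request (hypothetical endpoint)
  const posts = await fetch("https://example.com/api/posts").then((res) => res.json());
  return { props: { posts } };
}

export default function Index({ posts }) {
  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  );
}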
However, it's brand new! So it's also a bunch of these things:
Author: flareact
Source Code: https://github.com/flareact/flareact
License: MIT license
Modular framework for universal React applications
✈️ Universal
Creates SSR React
applications - includes solid server with metrics, health checks and graceful degradation support
💉 Dependency Injection
Provides simple and powerful DI system, inspired by Angular
and Nest.js
best practices
🧩 Modular
Every application is built from a list of feature modules - each doing one thing right!
⚡ Fast and lightweight
Enforces best web-performance techniques - resources preloading and inlining, lazy hydration 🚀, modern ES bundles, tree-shakable libraries
🔗 Chain of commands
Elegant pattern for complete control over application life-cycle - predictable flow for every HTTP request into application, running async actions in parallel, limits the duration of server-side actions
🧱 Micro Frontends
Heavily integrated solution for Micro Frontends with SSR and Module Federation
🛠️ Tooling
Functional CLI for generating, developing, analyzing, and bundling tramvai applications - powered by webpack@5
🧪 Testing
Complete set of unit and integration testing utilities - powered by jest and testing-library
🕊️ Migrations
Automatic migrations with jscodeshift codemods
Generate new application
npm init @tramvai my-awesome-app # or npx @tramvai/create my-awesome-app
Run development server
cd my-awesome-app && npm start
The application will be available at http://localhost:3000/
Author: Tinkoff
Source Code: https://github.com/Tinkoff/tramvai
License: Apache-2.0 license
Here's a modern portfolio you could have to impress an employer 🚀
Build it with me & add your own ✨sass ✨ to it and make it your own 🔥
#nestjs #typescript #cms #ssr #webdevelopment #frontend #fullstackdeveloper #webdev
graphql-codegen-apollo-next-ssr
Generate apollo code for nextjs ssr
Nextjs recently introduced getServerSideProps, which doesn't allow using the HOC pattern adopted by the official apollo graphql plugin (based on getInitialProps). At the same time, the SSR method offered by apollo client (getDataFromTree) forces the react app to render multiple times in order to collect and fetch all the relevant queries. By declaring a top-level query, we can save rendering time and provide a simpler pattern that works with getServerSideProps.
This plugin generates a typesafe version of getServerSideProps for each server query, and the corresponding HOC to wrap the react component returning the cached result. The limitation/advantage of this solution is that all the queries embedded into inner react components are ignored, unless covered by the top-level cache.
Plugin options:

- excludePatterns (default: null): regexp to exclude operation names
- excludePatternsOptions (default: ''): regexp flags to exclude operation names
- reactApolloVersion (default: 2): apollo client version
- apolloCacheImportFrom (default: apollo-cache-inmemory): apollo-cache-inmemory dependency
- apolloImportFrom (default: apollo-client v2 or @apollo/client v3): apollo client dependency
- apolloCacheImportFrom (default: apollo-cache-inmemory v2 or @apollo/client v3): apollo cache client dependency
- apolloStateKey (default: apolloState): key used for storing the Apollo state
- withHooks (default: false): customize the output by enabling/disabling the generated React Hooks
- withHOC (default: true): customize the output by enabling/disabling the HOC
- customImports (default: ''): full custom import declaration
- contextType (default: 'any'): context type passed to getApolloClient
- contextTypeRequired (default: false): whether the context should be required when called from getServerSideProps
- pre (default: ''): custom code before each function
- post (default: ''): custom code after each function
- apolloClientInstanceImport (default: ''): add apolloClient instance imports
- replaceQuery (default: false): replace the "query" keyword inside the generated operations
- replacePage (default: false): replace the "page" keyword inside the generated operations
- reactImport (default: import type React from 'react';): custom react import

Example codegen configuration:

overwrite: true
schema:
  - 'https://myschema/graphql'
documents:
  - 'src/**/*.graphql'
generates:
  src/@types/codegen/graphql.tsx:
    plugins:
      - 'typescript'
      - 'typescript-operations'
      - 'typescript-react-apollo'
  src/@types/codegen/page.tsx:
    config:
      documentMode: external
      importDocumentNodeExternallyFrom: ./graphql
    preset: import-types
    presetConfig:
      typesPath: ./graphql
    plugins:
      - ./build/src/index.js
hooks:
  afterAllFileWrite:
    - prettier --write
Download Details:
Author: correttojs
Source Code: https://github.com/correttojs/graphql-codegen-apollo-next-ssr
License:
#nextjs #react #javascript #graphql #apollo #ssr
Hyperapp Render
This library allows you to render Hyperapp views to an HTML string.
Our first example is an interactive app from which you can generate an HTML markup. Go ahead and try it online.
import { h } from 'hyperapp'
import { renderToString } from 'hyperapp-render'
const state = {
text: 'Hello'
}
const actions = {
setText: text => ({ text })
}
const view = (state, actions) => (
<main>
<h1>{state.text.trim() === '' ? '👋' : state.text}</h1>
<input value={state.text} oninput={e => actions.setText(e.target.value)} />
</main>
)
const html = renderToString(view(state, actions))
console.log(html) // => <main><h1>Hello</h1><input value="Hello"/></main>
Looking for a boilerplate? Try Hyperapp Starter with pre-configured server-side rendering and many more.
Using npm:
npm install hyperapp-render --save
Or using a CDN like unpkg.com or jsDelivr with the following script tag:
<script src="https://unpkg.com/hyperapp-render"></script>
You can find the library in window.hyperappRender
.
We support all ES5-compliant browsers, including Internet Explorer 9 and above, but depending on your target browsers you may need to include polyfills for Set
and Map
before any other code.
The library provides two functions which you can use depending on your needs or personal preferences:
import { renderToString, renderToStream } from 'hyperapp-render'
renderToString(<Component />) // => <string>
renderToString(view(state, actions)) // => <string>
renderToString(view, state, actions) // => <string>
renderToStream(<Component />) // => <stream.Readable> => <string>
renderToStream(view(state, actions)) // => <stream.Readable> => <string>
renderToStream(view, state, actions) // => <stream.Readable> => <string>
Note: renderToStream
is available from Node.js environment only (v6 or newer).
You can use renderToString
function to generate HTML on the server and send the markup down on the initial request for faster page loads and to allow search engines to crawl your pages for SEO purposes.
If you call hyperapp.app()
on a node that already has this server-rendered markup, Hyperapp will preserve it and only attach event handlers, allowing you to have a very performant first-load experience.
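On the client, that boot code could look roughly like this (a sketch against Hyperapp 1.x, reusing the state, actions and view from the first example; the container id is an assumption):

import { app } from 'hyperapp'
// state, actions and view are the same objects that produced the server-rendered markup

app(state, actions, view, document.getElementById('app'))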
The renderToStream
function returns a Readable stream that outputs an HTML string. The HTML output by this stream is exactly equal to what renderToString
would return. By using this function you can reduce TTFB and improve user experience even more.
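On a Node.js server you could pipe that stream straight into the HTTP response, for example (a minimal sketch; view, state and actions are the ones from the example above):

import http from 'http'
import { renderToStream } from 'hyperapp-render'

http
  .createServer((req, res) => {
    res.setHeader('Content-Type', 'text/html')
    // start streaming markup to the client as soon as it is produced
    renderToStream(view, state, actions).pipe(res)
  })
  .listen(3000)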
The library automatically escapes text content and attribute values of virtual DOM nodes to protect your application against XSS attacks. However, it is not safe to allow "user input" for node names or attribute keys:
const Node = 'div onclick="alert()"'
renderToString(<Node title="XSS">Hi</Node>)
// => <div onclick="alert()" title="XSS">Hi</div>
const attributes = { 'onclick="alert()" title': 'XSS' }
renderToString(<div {...attributes}>Hi</div>)
// => <div onclick="alert()" title="XSS">Hi</div>
const userInput = '<script>alert()</script>'
renderToString(<div title="XSS" innerHTML={userInput}>Hi</div>)
// => <div title="XSS"><script>alert()</script></div>
Author: Kriasoft
Source Code: https://github.com/kriasoft/hyperapp-render
License: MIT license
react-snap
Pre-renders a web app into static HTML. Uses Headless Chrome to crawl all available links starting from the root. Heavily inspired by prep and react-snapshot, but written from scratch. Uses best practices to get the best loading performance.
Similar to react-snapshot, but works with any technology (e.g., Vue).
Zero configuration is the main feature. You do not need to worry about how it works or how to configure it. But if you are curious, here are the details.
Install:
yarn add --dev react-snap
Change package.json
:
"scripts": {
"postbuild": "react-snap"
}
Change src/index.js
(for React 16+):
import { hydrate, render } from "react-dom";
const rootElement = document.getElementById("root");
if (rootElement.hasChildNodes()) {
hydrate(<App />, rootElement);
} else {
render(<App />, rootElement);
}
That's it!
To do hydration in Preact you need to use this trick:
const rootElement = document.getElementById("root");
if (rootElement.hasChildNodes()) {
preact.render(<App />, rootElement, rootElement.firstElementChild);
} else {
preact.render(<App />, rootElement);
}
Install:
yarn add --dev react-snap
Change package.json
:
"scripts": {
"postbuild": "react-snap"
},
"reactSnap": {
"source": "dist",
"minifyHtml": {
"collapseWhitespace": false,
"removeComments": false
}
}
Or use preserveWhitespace: false
in vue-loader
.
source
- output folder of webpack or any other bundler of your choice
Read more about minifyHtml
caveats in #142.
Example: Switch from prerender-spa-plugin to react-snap
Only works with routing strategies using the HTML5 history API. No hash(bang) URLs.
Vue uses the data-server-rendered
attribute on the root element to mark SSR generated markup. When this attribute is present, the VDOM rehydrates instead of rendering everything from scratch, which can result in a flash.
This is a small hack to fix rehydration problem:
window.snapSaveState = () => {
document.querySelector("#app").setAttribute("data-server-rendered", "true");
};
window.snapSaveState
is a callback to save the state of the application at the end of rendering. It can be used for Redux or async components. In this example, it is repurposed to alter the DOM, this is why I call it a "hack." Maybe in future versions of react-snap
, I will come up with better abstractions or automate this process.
Make sure to use replace: false
for root components
If you need to pass some options for react-snap
, you can do this in your package.json
like this:
"reactSnap": {
"inlineCss": true
}
Not all options are documented yet, but you can check defaultOptions
in index.js
.
Experimental feature - requires improvements.
react-snap
can inline critical CSS with the help of minimalcss and full CSS will be loaded in a non-blocking manner with the help of loadCss.
Use inlineCss: true
to enable this feature.
TODO: as soon as this feature is stable, it should be enabled by default.
Also known as code splitting, dynamic import (TC39 proposal), "chunks" (which are loaded on demand), "layers", "rollups", or "fragments". See: Guide To JavaScript Async Components
An async component (in React) is a technique (typically implemented as a higher-order component) for loading components on demand with the dynamic import
operator. There are a lot of solutions in this field. Here are some examples:
It is not a problem to render async components with react-snap
, the tricky part happens when a prerendered React application boots and async components are not loaded yet, so React draws the "loading" state of a component, and later when the component is loaded, React draws the actual component. As a result, the user sees a flash:
100% /----| |----
/ | |
/ | |
/ | |
/ |____|
visual progress /
/
0% -------------/
Usually a code splitting library provides an API to handle it during SSR, but as long as "real" SSR is not used in react-snap - the issue surfaces, and there is no simple way to fix it.
import loadable from "@loadable/component";
import { PrerenderedComponent } from "react-prerendered-component";
const prerenderedLoadable = dynamicImport => {
const LoadableComponent = loadable(dynamicImport);
return React.memo(props => (
// you can use the `.preload()` method from react-loadable or react-imported-component
<PrerenderedComponent live={LoadableComponent.load()}>
<LoadableComponent {...props} />
</PrerenderedComponent>
));
};
const MyComponent = prerenderedLoadable(() => import("./MyComponent"));
MyComponent
will use prerendered HTML to prevent the page content from flashing (it will find the required piece of HTML using an id
attribute generated by PrerenderedComponent
and inject it using dangerouslySetInnerHTML
).
You can also use React.lazy, but React.lazy doesn't provide a prefetch method (load or preload), so you need to implement it yourself (this can be a fragile solution):
const prefetchMap = new WeakMap();
const prefetchLazy = LazyComponent => {
if (!prefetchMap.has(LazyComponent)) {
prefetchMap.set(LazyComponent, LazyComponent._ctor());
}
return prefetchMap.get(LazyComponent);
};
const prerenderedLazy = dynamicImport => {
const LazyComponent = React.lazy(dynamicImport);
return React.memo(props => (
<PrerenderedComponent live={prefetchLazy(LazyComponent)}>
<LazyComponent {...props} />
</PrerenderedComponent>
));
};
const MyComponent = prerenderedLazy(() => import("./MyComponent"));
Another option is loadable-components 2.2.3 (the current version is >5). The old version of loadable-components can solve this issue for a "snapshot" setup:
import { loadComponents, getState } from "loadable-components";
window.snapSaveState = () => getState();
loadComponents()
.then(() => hydrate(AppWithRouter, rootElement))
.catch(() => render(AppWithRouter, rootElement));
If you don't use the babel plugin, don't forget to provide modules:
const NotFoundPage = loadable(() => import("src/pages/NotFoundPage"), {
modules: ["NotFoundPage"]
});
loadable-components was deprecated in favour of @loadable/component, but @loadable/component dropped getState. So if you want to use loadable-components you can use the old version (2.2.3, the latest version at the moment of writing), or you can wait until React implements proper handling of this case with asynchronous rendering and React.lazy.
See: Redux Server Rendering Section
// Grab the state from a global variable injected into the server-generated HTML
const preloadedState = window.__PRELOADED_STATE__;
// Allow the passed state to be garbage-collected
delete window.__PRELOADED_STATE__;
// Create Redux store with initial state
const store = createStore(counterApp, preloadedState || initialState);
// Tell react-snap how to save Redux state
window.snapSaveState = () => ({
__PRELOADED_STATE__: store.getState()
});
Caution: as of now, only basic "JSON" data types are supported: e.g. Date
, Set
, Map
, and NaN
won't be handled correctly (#54).
You can block all third-party requests with the following config:
"skipThirdPartyRequests": true
react-snap
can capture all AJAX requests. It will store json
requests in the domain in window.snapStore[<path>]
, where <path>
is the path of the request.
Use "cacheAjaxRequests": true
to enable this feature.
This feature can conflict with the browser cache. See #197 for details. You may want to disable cache in this case: "puppeteer": { "cache": false }
.
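Putting those network-related options together in package.json might look like this (a sketch; enable only what you need):

"reactSnap": {
  "skipThirdPartyRequests": true,
  "cacheAjaxRequests": true,
  "puppeteer": { "cache": false }
}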
By default, create-react-app
uses index.html
as a fallback:
navigateFallback: publicUrl + '/index.html',
You need to change this to an un-prerendered version of index.html
- 200.html
, otherwise you will see index.html
flash on other pages (if you have any). See Configure sw-precache without ejecting for more information.
Puppeteer (Headless Chrome) may fail due to sandboxing issues. To get around this, you may use:
"puppeteerArgs": ["--no-sandbox", "--disable-setuid-sandbox"]
Read more about puppeteer troubleshooting.
"inlineCss": true
sometimes causes problems in containers.
To run react-snap
inside docker
with Alpine, you might want to use a custom Chromium executable. See #93 and #132.
heroku buildpacks:add https://github.com/jontewks/puppeteer-heroku-buildpack.git
heroku buildpacks:add heroku/nodejs
heroku buildpacks:add https://github.com/heroku/heroku-buildpack-static.git
See this PR. At the moment of writing, Heroku doesn't support HTTP/2.
Semantic UI is defined over class substrings that contain spaces (e.g., "three column"). Sorting the class names, therefore, breaks the styling. To get around this, use the following configuration:
"minifyHtml": { "sortClassName": false }
From version 1.17.0
, sortClassName
is false
by default.
Once JS on the client is loaded, components initialized and your JSS styles are regenerated, it's a good time to remove server-side generated style tag in order to avoid side-effects
This basically means that JSS doesn't support rehydration
. See #99 for a possible solutions.
react-router
v3See #135.
You can use navigator.userAgent == "ReactSnap"
to do some checks in the app code while snapping—for example, if you use an absolute path for your API AJAX request. While crawling, however, you should request a specific host.
Example code:
const BASE_URL =
process.env.NODE_ENV == "production" && navigator.userAgent != "ReactSnap"
? "/"
: "http://xxx.yy/rest-api";
See alternatives.
Please provide a reproducible demo of a bug and steps to reproduce it. Thanks!
Tweet it, like it, share it, star it. Thank you.
You can also contribute to minimalcss, which is a big part of react-snap. Also, give it some stars.
Author: Stereobooster
Source Code: https://github.com/stereobooster/react-snap
License: MIT license
1652728500
React ESI: Blazing-fast Server-Side Rendering for React and Next.js
React ESI is a super powerful cache library for vanilla React and Next.js applications that can make highly dynamic applications as fast as static sites. It provides a straightforward way to boost your application's performance by storing fragments of server-side rendered pages in edge cache servers. This means that after the first rendering, fragments of your pages will be served in a few milliseconds by servers close to your end users! It's a very efficient way to improve the performance and the SEO of your websites, and to dramatically reduce both your hosting costs and the energy consumption of these applications. Help the planet, use React ESI!
Because it is built on top of the Edge Side Includes (ESI) W3C specification, React ESI natively supports most of the well-known cloud cache providers including Cloudflare Workers, Akamai and Fastly. Of course, React ESI also supports the open source Varnish cache server that you can use in your own infrastructure for free (configuration example).
React ESI also lets you specify a different Time To Live (TTL) per React component and generate the corresponding HTML asynchronously using a secure (signed) URL. The cache server fetches and stores in the cache all the needed fragments (the HTML corresponding to every React component), builds the final page and sends it to the browser. React ESI also allows components to (re-)render client-side without any specific configuration.
Schema from The Varnish Book
Discover React ESI in depth with this presentation
Using Yarn:
$ yarn add react-esi
Or using NPM:
$ npm install react-esi
React ESI provides a convenient Higher Order Component (HOC) to wrap the components you want to cache.
React ESI automatically calls a static async method named getInitialProps() to populate the initial props of the component. Server-side, this method can access the HTTP request and response, for instance to set the Cache-Control header or some cache tags.
The props returned by getInitialProps() will also be injected in the server-side generated HTML (in a <script> tag). Client-side, the component will reuse the props coming from the server (the method will not be called a second time). If the method hasn't been called server-side, it will be called client-side the first time the component is mounted.
// pages/index.js
import React from 'react';
import withESI from 'react-esi';
import MyFragment from 'components/MyFragment';

// The second parameter is a unique ID identifying this fragment.
// If you use different instances of the same component, use a different ID per instance.
const MyFragmentESI = withESI(MyFragment, 'MyFragment');

const Index = () => (
  <div>
    <h1>React ESI demo app</h1>
    <MyFragmentESI greeting="Hello!" />
  </div>
);

export default Index;
// components/MyFragment.js
import React from 'react';
export default class MyFragment extends React.Component {
render() {
return (
<section>
<h1>A fragment that can have its own TTL</h1>
<div>{this.props.greeting /* access to the props as usual */}</div>
<div>{this.props.dataFromAnAPI}</div>
</section>
);
}
static async getInitialProps({ props, req, res }) {
return new Promise(resolve => {
if (res) {
// Set a TTL for this fragment
res.set('Cache-Control', 's-maxage=60, max-age=30');
}
// Simulate a delay (call to a remote service such as a web API)
setTimeout(
() =>
resolve({
...props, // Props coming from index.js, passed through the internal URL
dataFromAnAPI: 'Hello there'
}),
2000
);
});
}
}
The initial props must be serializable using JSON.stringify(). Beware of Map, Set and Symbol!
Note: for convenience, getInitialProps() has the same signature as the Next.js one. However, it's a totally independent and standalone implementation (you don't need Next.js to use it).
To serve the fragments, React ESI provides a ready to use controller compatible with Express:
// server.js
import express from 'express';
import { path, serveFragment } from 'react-esi/lib/server';
const server = express();
server.use((req, res, next) => {
// Send the Surrogate-Control header to announce ESI support to proxies (optional with Varnish, depending on your config)
res.set('Surrogate-Control', 'content="ESI/1.0"');
next();
});
server.get(path, (req, res) =>
  // "path" defaults to /_fragment, change it using the REACT_ESI_PATH env var
  serveFragment(
    req,
    res,
    // "fragmentID" is the second parameter passed to the "withESI" HOC; the root component used for this fragment must be returned
    fragmentID => require(`./components/${fragmentID}`).default
  )
);
// ...
// Other Express routes come here
server.listen(80);
Alternatively, here is a full example using a Next.js server:
// server.js
import express from 'express';
import next from 'next';
import { path, serveFragment } from 'react-esi/lib/server';
const port = parseInt(process.env.PORT, 10) || 3000
const dev = process.env.NODE_ENV !== 'production'
const app = next({ dev })
const handle = app.getRequestHandler()
app.prepare().then(() => {
const server = express();
server.use((req, res, next) => {
// Send the Surrogate-Control header to announce ESI support to proxies (optional with Varnish)
res.set('Surrogate-Control', 'content="ESI/1.0"');
next();
});
server.get(path, (req, res) =>
serveFragment(req, res, fragmentID => require(`./components/${fragmentID}`).default)
);
server.get('*', handle); // Next.js routes
server.listen(port, err => {
if (err) throw err;
console.log(`> Ready on http://localhost:${port}`);
});
});
React ESI can be configured using environment variables:
REACT_ESI_SECRET: a secret key used to sign the fragment URLs (defaults to a random string; it's highly recommended to set it to prevent problems when the server restarts, or when using multiple servers)
REACT_ESI_PATH: the internal path used to generate the fragments, should not be exposed publicly (default: /_fragment)
Passing Attributes to the <esi:include> Element
To pass attributes to the <esi:include> element generated by React ESI, pass a prop having the following structure to the HOC:
{
esi: {
attrs: {
alt: "Alternative text",
onerror: "continue"
}
}
}
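For example, the prop can be passed directly to the wrapped component from the earlier snippet (MyFragmentESI comes from pages/index.js above; the attribute values are illustrative):
// pages/index.js (excerpt): forward ESI attributes through the HOC-wrapped component
<MyFragmentESI
  greeting="Hello!"
  esi={{ attrs: { alt: 'Alternative text', onerror: 'continue' } }}
/>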
By default, most cache proxies, including Varnish, never serve a response from the cache if the request contains a cookie. If you test using localhost or a similar local domain, clear all pre-existing cookies for this origin. If the cookies are expected (e.g. Google Analytics or ad cookies), then you must properly configure your cache proxy to ignore them. Here are some examples for Varnish.
To allow the client-side app to reuse the props fetched or computed server-side, React ESI injects <script> tags containing them into the ESI fragments. After the page is assembled by the cache server, these script tags end up mixed with the legitimate HTML. They are automatically removed from the DOM before the rendering phase.
React ESI plays very well with advanced cache strategies. Give them a try!
We love Vue and Nuxt as much as React and Next, so we're currently porting React ESI to this platform. Contact us if you want to help!
Created by Kévin Dunglas. Sponsored by Les-Tilleuls.coop.
Author: Dunglas
Source Code: https://github.com/dunglas/react-esi
License: MIT license
1651672451
React code splitting made easy. Reduce your bundle size without stress ✂️✨.
npm install @loadable/component
import loadable from '@loadable/component'
const OtherComponent = loadable(() => import('./OtherComponent'))
function MyComponent() {
return (
<div>
<OtherComponent />
</div>
)
}
See the documentation at loadable-components.com for more information about using Loadable Components!
Quicklinks to some of the most-visited pages:
Loadable Components is an MIT-licensed open source project. It's an independent project with ongoing development made possible thanks to the support of these awesome backers. If you'd like to join them, please consider:
Author: Gregberge
Source Code: https://github.com/gregberge/loadable-components
License: MIT License
1649652360
NextJS Project Tutorial - User Profile Page Server Side Rendering (SSR) - 19.
Hey Friends, in this NextJS Project Tutorial I will be discussing the User Profile Page component, how to render it from the server side, and why we need to render the user profile page from the server side.
1649626980
NextJS Project Tutorial - News or Blog Page SSR - 18.
Hey Friends, in this NextJS Project Tutorial I will be creating a news or blog page for our project using server-side rendering (SSR).