5 Techniques to Lazy-Load Website Content for Better SEO & User Experience

Website speed is a crucial aspect of on-page SEO that everyone can control. Your goal is to be interactive in under 3 seconds, even on a basic phone over a 3G connection.

However, most websites make so many requests and deliver such large payloads that this time budget cannot be met. In fact, the average web page takes 22 seconds to load, according to Google’s research.

But what if I told you there is a way to offload or even avoid loading page assets until they are needed?

This can give your website a distinct advantage over the competition, because not only will Google rank your pages better, your visitors will like them better too!

The good news is that it takes only a little bit of JavaScript and some intentional effort to update your site.

And Google’s Search team is all for this technique, called ‘lazy-loading’.

Google’s short guide mentions three primary points:

  • Load Visible Content
  • How to Support Infinite Scrolling and Pagination
  • How to Test Your Implementation

The trick is to avoid hiding content you need indexed from Google, which is why they published this helpful, if thin, guide.

They have also published a guide on using Intersection Observer to lazy load images and videos until they are in view.
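
Here is a minimal sketch of that approach for images, assuming your markup stores the real source in a data-src attribute (the attribute name and the 200px preload margin are my own choices, not Google's):

const images = document.querySelectorAll("img[data-src]");

const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target;
    img.src = img.dataset.src;       // swap in the real source to trigger the download
    img.removeAttribute("data-src");
    obs.unobserve(img);              // each image only needs to load once
  }
}, { rootMargin: "200px" });         // begin loading slightly before the image scrolls into view

images.forEach(img => observer.observe(img));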

For average websites that share content, like a blog or marketing site, lazy loading is rather simple. But for web applications it can be more complex, and I will dive into some techniques I use to load code as needed to keep my applications fast and responsive.

I will review the points Google offers and provide some advice and examples from my own experience using IntersectionObserver and a little on how the History API works.
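
To preview the History API piece: Google's infinite-scroll guidance pairs lazy-loaded content with real, paginated URLs. A minimal sketch, assuming a ?page=N URL scheme (the scheme and function name are illustrative, not from Google's guide):

function onPageVisible(pageNumber) {
  // reflect the page of results the reader is viewing in the URL without reloading
  const url = new URL(window.location.href);
  url.searchParams.set("page", pageNumber);
  history.pushState({ page: pageNumber }, "", url.toString());
}

window.addEventListener("popstate", event => {
  // restore the right page of content when the user navigates back or forward
  const page = (event.state && event.state.page) || 1;
  // render or scroll to that page of content here
});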

The lazy-loading guidelines dovetail with Google’s recent guidance around single page apps and SEO, because lazy loading is a good technique to improve your website’s user experience.

For more complex web apps I will touch on a technique to load scripts on demand, rather than up front where they block the page from rendering.
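
As a teaser, here is a minimal sketch of on-demand script loading (the chart bundle URL, the #chart container, and the renderCharts function are hypothetical placeholders):

function loadScript(src) {
  // inject a script tag and resolve once the code has executed
  return new Promise((resolve, reject) => {
    const script = document.createElement("script");
    script.src = src;
    script.async = true;
    script.onload = resolve;
    script.onerror = () => reject(new Error("failed to load " + src));
    document.head.appendChild(script);
  });
}

// only fetch the charting code when its container nears the viewport
new IntersectionObserver((entries, obs) => {
  if (entries.some(e => e.isIntersecting)) {
    obs.disconnect();
    loadScript("/js/charts.js").then(() => renderCharts());
  }
}, { rootMargin: "200px" }).observe(document.querySelector("#chart"));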

Lazy-Load Content Scenarios

Before you add lazy-loading capabilities to your site you should inventory what assets your pages load and determine what is required for the initial experience and what can be loaded once the user starts reading or interacting with the content.

Let me go back to my two primary website scenarios, content and application, because their content needs vary but overlap.

First, a content- or sales-focused website. For simplicity I will just call it a blog. These sites typically need HTML, CSS, JavaScript, images, and custom fonts. They also tend to be sites you want free organic traffic from Google, which means SEO is important.

Besides making the page load faster, you need to make sure the content is accessible to the search engine spider, which may not execute your JavaScript and won’t scroll the page to trigger lazy-loaded content.

Content can refer to media, code or copy (text and markup).

#user experience


Dotnet Script: Run C# Scripts From The .NET CLI

dotnet script

Run C# scripts from the .NET CLI, define NuGet packages inline and edit/debug them in VS Code - all of that with full language services support from OmniSharp.

NuGet Packages

Name                                   Framework(s)
dotnet-script (global tool)            net6.0, net5.0, netcoreapp3.1
Dotnet.Script (CLI as NuGet package)   net6.0, net5.0, netcoreapp3.1
Dotnet.Script.Core                     netcoreapp3.1, netstandard2.0
Dotnet.Script.DependencyModel          netstandard2.0
Dotnet.Script.DependencyModel.Nuget    netstandard2.0

Installing

Prerequisites

The only thing we need to install is the .NET Core 3.1 SDK or the .NET 5.0 SDK.

.NET Core Global Tool

.NET Core 2.1 introduced the concept of global tools meaning that you can install dotnet-script using nothing but the .NET CLI.

dotnet tool install -g dotnet-script

Tool 'dotnet-script' (version '0.22.0') was successfully installed.

You can invoke the tool using the following command: dotnet-script

The advantage of this approach is that you can use the same command for installation across all platforms. .NET Core SDK also supports viewing a list of installed tools and their uninstallation.

dotnet tool list -g

Package Id         Version      Commands
---------------------------------------------
dotnet-script      0.22.0       dotnet-script

dotnet tool uninstall dotnet-script -g

Tool 'dotnet-script' (version '0.22.0') was successfully uninstalled.

Windows

choco install dotnet.script

We also provide a PowerShell script for installation.

(new-object Net.WebClient).DownloadString("https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.ps1") | iex

Linux and Mac

curl -s https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.sh | bash

If permission is denied, we can try with sudo.

curl -s https://raw.githubusercontent.com/filipw/dotnet-script/master/install/install.sh | sudo bash

Docker

A Dockerfile for running dotnet-script in a Linux container is available. Build:

cd build
docker build -t dotnet-script -f Dockerfile ..

And run:

docker run -it dotnet-script --version

Github

You can manually download all the releases in zip format from the GitHub releases page.

Usage

Our typical helloworld.csx might look like this:

Console.WriteLine("Hello world!");

That is all it takes, and we can execute the script. Script arguments are accessible via the global Args collection.

dotnet script helloworld.csx

Scaffolding

Simply create a folder somewhere on your system and issue the following command.

dotnet script init

This will create main.csx along with the launch configuration needed to debug the script in VS Code.

.
├── .vscode
│   └── launch.json
├── main.csx
└── omnisharp.json

We can also initialize a folder using a custom filename.

dotnet script init custom.csx

Instead of main.csx which is the default, we now have a file named custom.csx.

.
├── .vscode
│   └── launch.json
├── custom.csx
└── omnisharp.json

Note: Executing dotnet script init inside a folder that already contains one or more script files will not create the main.csx file.

Running scripts

Scripts can be executed directly from the shell as if they were executables.

foo.csx arg1 arg2 arg3

OSX/Linux

Just like any script, on OSX/Linux you need a #! (shebang) line and to mark the file as executable via chmod +x foo.csx. If you use dotnet script init to create your csx, it will automatically have the #! directive and be marked as executable.

The OSX/Linux shebang directive should be #!/usr/bin/env dotnet-script

#!/usr/bin/env dotnet-script
Console.WriteLine("Hello world");

You can execute your script using dotnet script or dotnet-script, which also lets you pass arguments to control your script execution.

foo.csx arg1 arg2 arg3
dotnet script foo.csx -- arg1 arg2 arg3
dotnet-script foo.csx -- arg1 arg2 arg3

Passing arguments to scripts

All arguments after -- are passed to the script in the following way:

dotnet script foo.csx -- arg1 arg2 arg3

Then you can access the arguments in the script context using the global Args collection:

foreach (var arg in Args)
{
    Console.WriteLine(arg);
}

All arguments before -- are processed by dotnet script. For example, the following command-line

dotnet script -d foo.csx -- -d

will pass the -d before -- to dotnet script and enable debug mode, whereas the -d after -- is passed to the script for its own interpretation.

NuGet Packages

dotnet script has built-in support for referencing NuGet packages directly from within the script.

#r "nuget: AutoMapper, 6.1.0"


Note: OmniSharp needs to be restarted after adding a new package reference.

Package Sources

We can define package sources using a NuGet.Config file in the script root folder. In addition to being used during execution of the script, it will also be used by OmniSharp, which provides language services for packages resolved from these package sources.

As an alternative to maintaining a local NuGet.Config file, we can define these package sources globally, either at the user level or at the computer level, as described in Configuring NuGet Behaviour.

It is also possible to specify package sources when executing the script.

dotnet script foo.csx -s https://SomePackageSource

Multiple packages sources can be specified like this:

dotnet script foo.csx -s https://SomePackageSource -s https://AnotherPackageSource

Creating DLLs or Exes from a CSX file

Dotnet-Script can create a standalone executable or DLL for your script.

Switch  Long switch        Description
-o      --output           Directory where the published executable should be placed. Defaults to a 'publish' folder in the current directory.
-n      --name             The name for the generated DLL (executable not supported at this time). Defaults to the name of the script.
        --dll              Publish to a .dll instead of an executable.
-c      --configuration    Configuration to use for publishing the script [Release/Debug]. Default is "Debug".
-d      --debug            Enables debug output.
-r      --runtime          The runtime used when publishing the self-contained executable. Defaults to your current runtime.
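
For example, a publish session might look like this (main.csx is an assumed script name, and the switch combinations are a sketch rather than an exhaustive reference):

dotnet script publish main.csx
dotnet script publish main.csx --dll -o ./out
dotnet script publish main.csx -c Release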

You can run the executable directly, independent of any dotnet installation, while the DLL can be run using the dotnet CLI like this:

dotnet script exec {path_to_dll} -- arg1 arg2

Caching

We provide two types of caching, the dependency cache and the execution cache which is explained in detail below. In order for any of these caches to be enabled, it is required that all NuGet package references are specified using an exact version number. The reason for this constraint is that we need to make sure that we don't execute a script with a stale dependency graph.

Dependency Cache

In order to resolve the dependencies for a script, a dotnet restore is executed under the hood to produce a project.assets.json file from which we can figure out all the dependencies we need to add to the compilation. This is an out-of-process operation and represents a significant overhead to the script execution. So this cache works by looking at all the dependencies specified in the script(s), either in the form of NuGet package references or assembly file references. If these dependencies match the dependencies from the last script execution, we skip the restore and read the dependencies from the already generated project.assets.json file. If any of the dependencies have changed, we must restore again to obtain the new dependency graph.

Execution cache

In order to execute a script, it needs to be compiled first, and since that is a CPU- and time-consuming operation, we make sure that we only compile when the source code has changed. This works by creating a SHA256 hash from all the script files involved in the execution. This hash is written to a temporary location along with the DLL that represents the result of the script compilation. When a script is executed, the hash is computed and compared with the hash from the previous compilation. If they match, there is no need to recompile and we run from the already compiled DLL. If the hashes don't match, the cache is invalidated and we recompile.

You can override this automatic caching by passing the --no-cache flag, which will bypass both caches and cause dependency resolution and script compilation to happen every time we execute the script.

Cache Location

The temporary location used for caches is a sub-directory named dotnet-script under (in order of priority):

  1. The path specified for the value of the environment variable named DOTNET_SCRIPT_CACHE_LOCATION, if defined and value is not empty.
  2. Linux distributions only: $XDG_CACHE_HOME if defined otherwise $HOME/.cache
  3. macOS only: ~/Library/Caches
  4. The value returned by Path.GetTempPath for the platform.

 

Debugging

The days of debugging scripts using Console.WriteLine are over. One major feature of dotnet script is the ability to debug scripts directly in VS Code. Just set a breakpoint anywhere in your script file(s) and hit F5 (Start Debugging).


Script Packages

Script packages are a way of organizing reusable scripts into NuGet packages that can be consumed by other scripts. This means that we can now leverage scripting infrastructure without the need for any kind of bootstrapping.

Creating a script package

A script package is just a regular NuGet package that contains script files inside the content or contentFiles folder.

The following example shows how the scripts are laid out inside the NuGet package according to the standard convention.

└── contentFiles
    └── csx
        └── netstandard2.0
            └── main.csx

This example contains just the main.csx file in the root folder, but packages may have multiple script files either in the root folder or in subfolders below the root folder.

When loading a script package we will look for an entry point script to be loaded. This entry point script is identified by one of the following.

  • A script called main.csx in the root folder
  • A single script file in the root folder

If the entry point script cannot be determined, we will simply load all the script files in the package.

The advantage of using an entry point script is that we can control loading other scripts from the package.

Consuming a script package

To consume a script package, all we need to do is specify the NuGet package in the #load directive.

The following example loads the simple-targets package that contains script files to be included in our script.

#load "nuget:simple-targets-csx, 6.0.0"

using static SimpleTargets;
var targets = new TargetDictionary();

targets.Add("default", () => Console.WriteLine("Hello, world!"));

Run(Args, targets);

Note: Debugging also works for script packages so that we can easily step into the scripts that are brought in using the #load directive.

Remote Scripts

Scripts don't actually have to exist locally on the machine. We can also execute scripts that are made available on an http(s) endpoint.

This means that we can create a Gist on GitHub and execute it just by providing the URL to the Gist.

This Gist contains a script that prints out "Hello World"

We can execute the script like this

dotnet script https://gist.githubusercontent.com/seesharper/5d6859509ea8364a1fdf66bbf5b7923d/raw/0a32bac2c3ea807f9379a38e251d93e39c8131cb/HelloWorld.csx

That is a pretty long URL, so why not shorten it with TinyURL, like this:

dotnet script https://tinyurl.com/y8cda9zt

Script Location

A pretty common scenario is that we have logic that is relative to the script path. We don't want to require the user to be in a certain directory for these paths to resolve correctly, so here is how to provide the script path and the script folder regardless of the current working directory.

public static string GetScriptPath([CallerFilePath] string path = null) => path;
public static string GetScriptFolder([CallerFilePath] string path = null) => Path.GetDirectoryName(path);

Tip: Put these methods as top level methods in a separate script file and #load that file wherever access to the script path and/or folder is needed.
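
For example, a consuming script might look like this (path.csx is a hypothetical file holding the two methods above; because [CallerFilePath] is filled in at the call site, the values reflect the calling script):

#load "path.csx"

Console.WriteLine($"Script path:   {GetScriptPath()}");
Console.WriteLine($"Script folder: {GetScriptFolder()}");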

REPL

This release contains a C# REPL (Read-Evaluate-Print-Loop). The REPL mode ("interactive mode") is started by executing dotnet-script without any arguments.

The interactive mode allows you to supply individual C# code blocks and have them executed as soon as you press Enter. The REPL is configured with the same default set of assembly references and using statements as regular CSX script execution.

Basic usage

Once dotnet-script starts you will see a prompt for input. You can start typing C# code there.

~$ dotnet script
> var x = 1;
> x+x
2

If you submit an unterminated expression into the REPL (no ; at the end), it will be evaluated and the result will be serialized using a formatter and printed in the output. This is a bit more interesting than just calling ToString() on the object, because it attempts to capture the actual structure of the object. For example:

~$ dotnet script
> var x = new List<string>();
> x.Add("foo");
> x
List<string>(1) { "foo" }
> x.Add("bar");
> x
List<string>(2) { "foo", "bar" }
>

Inline NuGet packages

The REPL also supports inline NuGet packages, meaning that NuGet packages can be installed into the REPL from within the REPL. This is done via the same #r and #load from NuGet support and uses identical syntax.

~$ dotnet script
> #r "nuget: Automapper, 6.1.1"
> using AutoMapper;
> typeof(MapperConfiguration)
[AutoMapper.MapperConfiguration]
> #load "nuget: simple-targets-csx, 6.0.0";
> using static SimpleTargets;
> typeof(TargetDictionary)
[Submission#0+SimpleTargets+TargetDictionary]

Multiline mode

Using Roslyn syntax parsing, we also support multiline REPL mode. This means that if you have an incomplete code block and press Enter, we will automatically enter multiline mode. The mode is indicated by the * character. This is particularly useful for declaring classes and other more complex constructs.

~$ dotnet script
> class Foo {
* public string Bar {get; set;}
* }
> var foo = new Foo();

REPL commands

Aside from the regular C# script code, you can invoke the following commands (directives) from within the REPL:

Command   Description
#load     Load a script into the REPL (same as #load usage in CSX)
#r        Load an assembly into the REPL (same as #r usage in CSX)
#reset    Reset the REPL back to initial state (without restarting it)
#cls      Clear the console screen without resetting the REPL state
#exit     Exits the REPL

Seeding REPL with a script

You can execute a CSX script and, at the end of it, drop yourself into the context of the REPL. This way, the REPL becomes "seeded" with your code - all the classes, methods, and variables are available in the REPL context. This is achieved by running a script with the -i flag.

For example, given the following CSX script:

var msg = "Hello World";
Console.WriteLine(msg);

When you run this with the -i flag, Hello World is printed, the REPL starts, and the msg variable is available in the REPL context.

~$ dotnet script foo.csx -i
Hello World
>

You can also seed the REPL from inside the REPL - at any point - by invoking a #load directive pointed at a specific file. For example:

~$ dotnet script
> #load "foo.csx"
Hello World
>

Piping

The following example shows how we can pipe data in and out of a script.

The UpperCase.csx script simply converts the standard input to upper case and writes it back out to standard output.

using (var streamReader = new StreamReader(Console.OpenStandardInput()))
{
    Write(streamReader.ReadToEnd().ToUpper());
}

We can now simply pipe the output from one command into our script like this.

echo "This is some text" | dotnet script UpperCase.csx
THIS IS SOME TEXT

Debugging

The first thing we need to do is add the following to the launch.json file; this allows VS Code to debug a running process.

{
    "name": ".NET Core Attach",
    "type": "coreclr",
    "request": "attach",
    "processId": "${command:pickProcess}"
}

To debug this script, we need a way to attach the debugger in VS Code. The simplest thing we can do here is wait for the debugger to attach, by adding a method like this somewhere:

public static void WaitForDebugger()
{
    Console.WriteLine("Attach Debugger (VS Code)");
    while(!Debugger.IsAttached)
    {
    }
}

To debug the script when executing it from the command line, we can do something like this:

WaitForDebugger();
using (var streamReader = new StreamReader(Console.OpenStandardInput()))
{
    Write(streamReader.ReadToEnd().ToUpper()); // <- SET BREAKPOINT HERE
}

Now when we run the script from the command line we will get

$ echo "This is some text" | dotnet script UpperCase.csx
Attach Debugger (VS Code)

This gives us a chance to attach the debugger before stepping into the script. From VS Code, select the .NET Core Attach debugger and pick the process that represents the executing script.

Once that is done we should see our breakpoint being hit.

Configuration (Debug/Release)

By default, scripts will be compiled using the debug configuration. This is to ensure that we can debug a script in VS Code as well as attaching a debugger for long running scripts.

There are however situations where we might need to execute a script that is compiled with the release configuration. For instance, running benchmarks using BenchmarkDotNet is not possible unless the script is compiled with the release configuration.

We can specify this when executing the script.

dotnet script foo.csx -c release

 

Nullable reference types

Starting from version 0.50.0, dotnet-script supports .NET Core 3.0 and all the C# 8 features. The way we deal with nullable reference types in dotnet-script is that we turn every warning related to nullable reference types into a compiler error. This means every warning between CS8600 and CS8655 is treated as an error when compiling the script.

Nullable reference types are turned off by default, and we enable them using the #nullable enable compiler directive. This means that existing scripts will continue to work, but we can now opt in to this new feature.

#!/usr/bin/env dotnet-script

#nullable enable

string name = null;

Trying to execute the script will result in the following error:

main.csx(5,15): error CS8625: Cannot convert null literal to non-nullable reference type.

We will also see this when working with scripts in VS Code under the problems panel.


Download Details:
Author: filipw
Source Code: https://github.com/filipw/dotnet-script
License: MIT License

#dotnet  #aspdotnet  #csharp 

How To Create User-Generated Content? [A Simple Guide To Grow Your Brand]


In this digital world, online businesses aspire to catch the attention of users in modern, smarter ways. To achieve this, they need to explore new approaches. Here, user-generated content, or UGC, comes into the spotlight.

What is user-generated content?
“ It is the content by users for users.”

Generally, the UGC is the unbiased content created and published by the brand users, social media followers, fans, and influencers that highlight their experiences with the products or services. User-generated content has superseded other marketing trends and fallen into the advertising feeds of brands. Today, more than 86 percent of companies use user-generated content as part of their marketing strategy.

In this article, we have explained the ten best ideas to create wonderful user-generated content for your brand. Let’s start without any further ado.

  1. Content From Social Media Platforms
    In 2020, there are 3.81 billion people actively using social media around the globe. That is why social media content matters. When users see content on social media posted by a real person, they may be influenced by it, and that influence can be used to gain more customers or followers on your own social media platforms.


Generally, social media platforms help your brand generate content from your users. Any user content that promotes your brand on a social media platform is user-generated content for your business. When users create and share content on social media, it earns 28% higher engagement than a standard company post.

Furthermore, you can also embed your social media feed on your website. The Social Stream Designer WordPress plugin integrates feeds from different social media platforms like Facebook, Twitter, Instagram, and many more. With this plugin, you can create a responsive social wall on your WordPress website or blog in a few minutes. In addition, the plugin provides more than 40 customization options to make your social stream feeds more attractive.

  2. Consumer Survey
    A customer survey provides the powerful insights you need to make better decisions for your business. Moreover, it is great user-generated content that is useful for identifying unhappy consumers as well as those who like your product or service.

In general, surveys can be used to gauge attitudes and reactions, evaluate customer satisfaction, and estimate customers’ opinions on different issues. Another benefit of customer surveys is that collecting outcomes can be quick: within a few minutes, you can design and launch a customer feedback survey and send it to your customers for a response. From the survey data, you can identify your strengths and weaknesses and find the right way to improve them to gain more customers.

  3. Run Contests
    A contest is a wonderful way to increase awareness of a product or service. A contest not only helps you increase the volume of user-generated content submissions, it also helps improve their quality. However, when you create a contest, it is important to keep things as simple as possible.

Additionally, it is a great way to convert your brand leads into valuable customers. The key to running a successful contest is to make sure the reward is attractive enough to motivate participation. If the prize is relevant to your participants, chances are they were looking for it in the first place, and giving it to them for free puts you ahead of your competitors. They will most likely purchase more if your product or service satisfies them.

Furthermore, running contests also improves the customer-brand relationship and allows more people to participate. It will drive real results for your online business. If your WordPress website has Google Analytics, you can track contest page visits, referral traffic, other website traffic, and more.

  4. Reviews And Testimonials
    Customer reviews are a popular user-generated content strategy. One study found that around 68% of customers must see at least four reviews before trusting a brand, and approximately 40 percent of consumers will stop using a business after they read negative reviews.

Business reviews help your consumers make buying decisions without hurdles. While you may be tempted to remove all the negative reviews about your business, they are still valuable user-generated content that provides honest opinions from real users. Customer feedback can show you what needs to be improved in your products or services, which benefits not only the next customer but your business as a whole.


Reviews are only as powerful as the platform they are built upon. That is why it is important to gather reviews from third-party review websites like Google and Facebook as well as directly on your own website. Reviews are the most vital form of feedback and can help brands grow globally and motivate audience interactions.

You can also invite your customers to share their unique success stories as testimonials. It is a great way to showcase your products while inspiring others to purchase from your website.

  5. Video Content
    A great video is one that visitors enjoy. There are different types of videos, such as 360-degree product videos, product demo videos, animated videos, and corporate videos. A Facebook study demonstrated that users spend 3x more time watching live videos than regular videos, and live video is a rich source of user-created content.

Moreover, Instagram videos generate around 3x more comments than Instagram photo posts. They are generally short clips posted by real customers with a particular brand tagged. Brands can repost these stories as user-generated content to engage a wider audience and run authentic promotions on social media.

Similarly, imagine you are browsing a YouTube channel and see a brand endorsed by authentic customers in a short video; it catches your attention. In these videos, customers can tell you about the branded products, especially unboxing videos that show everything inside the package and how well it works for them. That type of video is enough to create a sense of desire in consumers.


#how to get more user generated content #importance of user generated content #user generated content #user generated content advantages #user generated content best practices #user generated content pros and cons



Maintain your website for a better user experience

If you own a firm and have a website for your company, then you need to maintain that website for a better user experience. You already know how important a website is for your company.

Website maintenance covers the tasks required to keep your website functioning properly and up to date. It involves regularly updating and checking your website; if there is any issue or bug, it gets resolved.

If you need website maintenance services, you can contact our company, Nevina Infotech. Our developers will help you maintain your website and improve its user experience.

#website maintenance services #website support and maintenance #website maintenance support #website maintenance packages #website maintenance company #website maintenance plans


Bypassing Server-Side Rendering Altogether For a Better Web User Experience

I almost hate to write this article, but at the same time I have experienced a special freedom lately. We build high-performance single page applications, which means our websites render data in the browser rather than on the server. Over the past year we were challenged by customers to live without ASP.NET, node, and other server-side rendering services. After several experiences and some changes to our workflow, we found we don’t need them, and neither do you.


Why No Server-Side Rendering?

You may have read articles and heard statements to the effect that server-side rendering is faster than client-side rendering. I have disputed this theory and even run some simple tests to compare the two techniques. Part of the argument is that browsers and the DOM are not fast. This is untrue. They are not fast when you do things wrong, which most fast-food frameworks do. They are designed using server-side architecture and techniques, not good client-side browser techniques.

Server-side rendering is performed by an application engine like ASP.NET, Ruby, PHP, Java, node, etc. Often this involves making a call to some sort of data store and possibly evaluating authentication credentials. This can, and often does, lead to a slower time to first byte (TTFB).

Good server-side rendering engines offer a caching mechanism like ASP.NET’s Output Caching. Here the server renders content once and caches the result in memory. Output caching allows you to declare caching parameters, so you can designate the cache time to live (TTL), query string, language, and other variations. It is a powerful technique.
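
As a minimal sketch of what that declaration looks like in ASP.NET MVC (the controller, action, and data-access call are hypothetical):

using System.Web.Mvc;

public class ProductController : Controller
{
    // cache the rendered HTML for one hour, keeping one cache entry per "id" value
    [OutputCache(Duration = 3600, VaryByParam = "id")]
    public ActionResult Details(int id)
    {
        var product = LoadProductFromStore(id); // placeholder for a real data store call
        return View(product);
    }

    private object LoadProductFromStore(int id) => null; // hypothetical helper
}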

In essence, Output Caching produces a static HTML file. Because data becomes stale over time, this cache needs to be purged and rendered again. All data has a life cycle, or personality as I call it, that determines the right way to cache the rendered markup.

Modern single page web applications by their nature move this server-side exercise to the client. This means a SPA is responsible for managing markup and rendering as needed. They also manage data caching. These responsibilities are managed by Love2Spa, our web platform, as fundamental design features.

This does not mean there is no dance between the web server and the client that needs to be managed. Single page applications require a different approach, where the server offers more of a dumb, static server architecture. The application instead relies on a robust API to provide on-demand data, preferably in JSON format. A single page application needs a fast, static web server that harks back to the web’s early days. The API provides the dynamic aspects of the application: data and authorized content.

This is good because static CDNs like Azure and AWS’ S3 and CloudFront are cheap, globally distributed, and fast. APIs can be built and hosted on the same cloud platform, using services like Azure App Services, Blob Storage, AWS Beanstalk, or S3. Again, API platforms are cheap and highly scalable. While this scenario has its own management tasks, they replace many tasks previously assigned to the web server. In my experience there is even less administration required.

So how does the modern single page application architecture look? Let’s look at two diagrams, one with a single web server and another using the distributed cloud-based services previously described.

#user experience #bypassing server-side rendering altogether #better web user #experiences