Jaimin Bhavsar


An overview of 6 Patterns for Microfrontends

Microfrontends are not a new thing, but certainly a recent trend. Coined in 2016, the term slowly gained popularity as problems started to appear when developing large-scale web apps.

In this article, we’ll go over the different patterns for creating microfrontends, their advantages and drawbacks, as well as implementation details and examples for each of the presented methods. I will also argue that microfrontends come with some inherent problems that may be solved by going even a step further — into a region that could be called either Modulith or Siteless UI, depending on the point of view.

But let’s go step by step. We start our journey with some historical background.

Background

When the web (i.e., HTTP as transport and HTML as representation) started, there was no notion of “design” or “layout”. Instead, plain text documents were exchanged. The introduction of the <img> tag changed all that. Together with <table>, designers could declare war on good taste. Nevertheless, one problem arose quite quickly: how is it possible to share a common layout across multiple sites? For this purpose two solutions were proposed:

  1. Use a program to dynamically generate the HTML (slowish, but okay — especially with the powerful capabilities behind the CGI standard)
  2. Use mechanisms already integrated into the web server to replace common parts with other parts

While the former led to C and Perl web servers, which gave way to PHP and Java, then to C# and Ruby, before finally arriving at Elixir and Node.js, the latter was not really pursued much after 2002. Web 2.0 also demanded more sophisticated tools, which is why server-side rendering using full-blown applications dominated for quite a while.

Until Netflix came along and told everyone to make smaller services and make cloud vendors rich. Ironically, while Netflix would be ready to run their own data centers, they are still massively coupled to cloud vendors such as AWS, which also hosts much of the competition, including Amazon Prime Video.

The Patterns

In the following, we’ll look at some of the patterns that are possible for actually realizing a microfrontend architecture. We’ll see that “it depends” is in fact the right answer when somebody asks: “What is the right way to implement microfrontends?” It very much depends on what we are after.

Each section contains a bit of example code and a very simple snippet (sometimes using a framework) to realize that pattern for a proof of concept or even an MVP. In the end, I try to provide a small summary to indicate the target audience according to my personal feelings.

Regardless of the pattern you choose, when integrating separate projects, keeping a consistent UI is always a challenge. Use tools like Bit (GitHub) to share and collaborate on UI components across microservices.

1. The Web Approach

The simplest method of implementing microfrontends is to deploy a set of small websites (ideally just a single page each), which are just linked together. The user goes from website to website by following the links leading to the different servers providing the content.

To keep the layout consistent a pattern library may be used on the server. Each team can implement the server-side rendering as they desire. The pattern library must also be usable on different platforms.

[Image: Patterns for Microfrontends]

Using the web approach can be as simple as deploying static sites to a server. This could be done with a Docker image as follows:

FROM nginx:stable
COPY ./dist/ /var/www
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
CMD ["nginx", "-g", "daemon off;"]
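
Building and running such an image then works like any other Docker image; for example (the image name is chosen just for illustration):

docker build -t my-microfrontend .
docker run -d -p 8080:80 my-microfrontend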

Obviously, we are not restricted to using a static site. We can apply server-side rendering, too. Changing the nginx base image to, e.g., an ASP.NET Core one allows us to use ASP.NET Core for generating the page. But how is this different from the frontend monolith? In this scenario, we would, for example, take a given microservice, exposed via a web API (i.e., returning something like JSON), and change it to return rendered HTML instead.

Logically, microfrontends in this world are nothing more than a different way of representing our API. Instead of returning “naked” data we generate the view already.
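
To illustrate this idea, a hypothetical Node.js/Express microservice (not from the article) could return a rendered HTML fragment instead of JSON:

// recommendations-service.js (illustrative sketch)
const express = require("express");
const app = express();

// Stub standing in for a real data source
function loadRecommendations(user) {
  return Promise.resolve([
    { url: "/products/1", title: "Product 1" },
    { url: "/products/2", title: "Product 2" },
  ]);
}

// Instead of res.json(items), the endpoint renders the view already
app.get("/recommendations", async (req, res) => {
  const items = await loadRecommendations(req.query.user);
  const links = items.map(item => `<a href="${item.url}">${item.title}</a>`).join("");
  res.send(`<section class="recommendations">${links}</section>`);
});

app.listen(8080);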

In this space we find the following solutions:

What are the pros and cons of this approach?

  • Pro: Completely isolated
  • Pro: Most flexible approach
  • Pro: Least complicated approach
  • Con: Infrastructure overhead
  • Con: Inconsistent user experience
  • Con: Internal URLs exposed to outside

2. Server-Side Composition

This is the true microfrontend approach. Why? As we have seen, microfrontends were originally supposed to be rendered server-side. As such, the whole approach certainly works independently. When we have a dedicated server for each small frontend snippet, we may really call this a microfrontend.

In a diagrammatic form, we may end up with a sketch like the one below.
[Image: Patterns for Microfrontends]

The complexity of this solution lies entirely in the reverse proxy layer. How the different smaller sites are combined into one site can be tricky. Especially caching rules, tracking, and other tricky bits will bite us at night.

In some sense, this adds a kind of gateway layer to the first approach. The reverse proxy combines different sources into a single delivery, and the tricky bits certainly need to (and can) be solved somehow. A simple nginx configuration for such a composition could look like this:

http {
  server {
    listen 80;
    server_name www.example.com;
    location /api/ {
      proxy_pass http://api-svc:8000/api;
    }
    location /web/admin {
      proxy_pass http://admin-svc:8080/web/admin;
    }
    location /web/notifications {
      proxy_pass http://public-svc:8080/web/notifications;
    }
    location / {
      # hypothetical default upstream serving everything else
      proxy_pass http://default-svc:8080/;
    }
  }
}

A little bit more powerful would be something like the Varnish reverse proxy.

In addition, we find that this is also a perfect use case for ESI (Edge-Side Includes), which is the (much more flexible) successor to the historic Server-Side Includes (SSI).

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" />
    <title>Demo</title>
  </head>
  <body>
    <esi:include src="http://header-service.example.com/" />
    <esi:include src="http://checkout-service.example.com/" />
    <esi:include
      src="http://navigator-service.example.com/"
      alt="http://backup-service.example.com/"
    />
    <esi:include src="http://footer-service.example.com/" />
  </body>
</html>

A similar setup can be seen with the Tailor backend service, which is a part of Project Mosaic.

In this space we find the following solutions:

What are the pros and cons of this approach?

  • Pro: Completely isolated
  • Pro: Looks embedded to user
  • Pro: Very flexible approach
  • Con: Enforces coupling between the components
  • Con: Infrastructure complexity
  • Con: Inconsistent user experience

3. Client-Side Composition

At this point one may be wondering: do we even need the reverse proxy? As this is a backend component, we may want to avoid it altogether. The solution is client-side composition. In its simplest form this can be implemented with <iframe> elements. Communication between the different parts is done via the postMessage method.
[Image: Patterns for Microfrontends]

Note: The JavaScript part may be replaced with “browser” in case of an <iframe>. In this case the potential interactivity is certainly different.

As already suggested by the name, this pattern tries to avoid the infrastructure overhead that comes with a reverse proxy. Instead, since microfrontends already contain the term “frontend”, the whole rendering is left to the client. The advantage is that with this pattern going serverless becomes possible. In the end the whole UI could be uploaded to, e.g., a GitHub Pages repository and everything just works.

As outlined, the composition can be done with quite simple methods, e.g., just an <iframe>. One of the major pain points, however, is how such integrations will look to the end user. The duplication in terms of resource needs is also quite substantial. A mix with pattern 1 is definitely possible, where the different parts are placed on independently operated web servers.

Nevertheless, in this pattern knowledge is required again — so component 1 already knows that component 2 exists and needs to be used. Potentially, it even needs to know how to use it.

Consider the following parent (i.e., the delivered application or website):

https://gist.github.com/dddbf80773ed8df03fcf20679f30c835
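
The gist above is not reproduced here; purely as an illustration (not the original gist content), such a parent page hosting the child in an <iframe> and talking to it via postMessage might look like this:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>Parent</title>
  </head>
  <body>
    <h1>Parent</h1>
    <p><button id="message_button">Send message to child</button></p>
    <iframe id="child" src="https://child.example.com/"></iframe>

    <script>
      const child = document.querySelector("#child");
      const messageButton = document.querySelector("#message_button");
      // Receive messages sent by the child via window.parent.postMessage
      window.addEventListener("message", function(e) {
        console.log("Message from child:", e.data);
      });
      // Send a message down to the child document
      messageButton.addEventListener("click", function() {
        child.contentWindow.postMessage(Math.random().toString(), "*");
      });
    </script>
  </body>
</html>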

We can write a page that enables the direct communication path:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>Microfrontend</title>
  </head>
  <body>
    <h1>Child</h1>
    <p><button id="message_button">Send message to parent</button></p>
    <div id="results"></div>

    <script>
      const results = document.querySelector("#results");
      const messageButton = document.querySelector("#message_button");
      function sendMessage(msg) {
        window.parent.postMessage(msg, "*");
      }
      window.addEventListener("message", function(e) {
        results.innerHTML = e.data;
      });
      messageButton.addEventListener("click", function(e) {
        sendMessage(Math.random().toString());
      });
    </script>
  </body>
</html>

If we do not consider frames an option, we could also go for web components. Here, communication could be done via the DOM by using custom events. However, already at this point it may make sense to consider client-side rendering instead of client-side composition, as rendering implies the need for a JavaScript client (which aligns with the web component approach).
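
As a rough sketch (the names are assumptions, not from the article), one custom element can dispatch a bubbling CustomEvent and another microfrontend can listen for it on the document:

// Producer: a web component that emits a custom DOM event
class BasketCounter extends HTMLElement {
  connectedCallback() {
    this.innerHTML = "<button>Add to basket</button>";
    this.querySelector("button").addEventListener("click", () => {
      // bubbles + composed let the event travel up to the document
      this.dispatchEvent(
        new CustomEvent("item-added", {
          bubbles: true,
          composed: true,
          detail: { id: 42 },
        })
      );
    });
  }
}

customElements.define("basket-counter", BasketCounter);

// Consumer: a different microfrontend reacts to the event
document.addEventListener("item-added", e => {
  console.log("Another microfrontend added item", e.detail.id);
});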

In this space we find the following solutions:

What are the pros and cons of this approach?

  • Pro: Completely isolated
  • Pro: Looks embedded to user
  • Pro: Serverless possible
  • Con: Enforces coupling between the components
  • Con: Inconsistent user experience
  • Con: May require JavaScript / no seamless integration

4. Client-Side Rendering

While client-side composition may work without JavaScript (e.g., only using frames that do not rely on communication with the parent or each other), client-side rendering will fail without JavaScript. In this space we already start creating a framework in the composing application. This framework has to be respected by all microfrontends brought in. At the very least, they need to use it to be mounted properly.

The pattern looks as follows.
[Image: Patterns for Microfrontends]

Quite close to client-side composition, right? In this case the JavaScript part cannot be replaced. The important difference is that server-side rendering is in general off the table. Instead, pieces of data are exchanged, which are then transformed into a view.

Depending on the designed or used framework the pieces of data may determine the location, point in time, and interactivity of the rendered fragment. Achieving a high degree of interactivity is no problem with this pattern.
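
As a minimal sketch (the endpoint and response shape are assumptions for illustration), the composing application could fetch a piece of data describing a fragment and turn it into a view on the client:

// Fetch a fragment descriptor and render it client-side
async function renderFragment(target) {
  const res = await fetch("/fragments/products");
  const data = await res.json(); // e.g., { title: "...", items: [{ url, name }, ...] }

  const list = data.items
    .map(item => `<li><a href="${item.url}">${item.name}</a></li>`)
    .join("");

  target.innerHTML = `<h2>${data.title}</h2><ul>${list}</ul>`;
}

renderFragment(document.querySelector("#products"));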

In this space we find the following solutions:

What are the pros and cons of this approach?

  • Pro: Enforces separation of concerns
  • Pro: Provides loose coupling of the components
  • Pro: Looks embedded to the user
  • Con: Requires more logic in the client
  • Con: Inconsistent user experience
  • Con: Requires JavaScript

5. SPA Composition

Why should we stop at client-side rendering using a single technology? Why not just obtain a JavaScript file and run it alongside all the other JavaScript files? The benefit of this is the potential to use multiple technologies side by side.
[Image: Patterns for Microfrontends]

It is up for debate whether running multiple technologies (regardless of whether it’s in the backend or the frontend — granted, in the backend it may be more “acceptable”) is a good thing or something to avoid. However, there are scenarios where multiple technologies need to work together.

Off the top of my head:

  • Migration scenarios
  • Support of a specific third-party technology
  • Political issues
  • Team constraints

Either way, the resulting pattern could be drawn as in the diagram above.

So what is going on here? In this case delivering just some JavaScript with the app shell is no longer optional — instead, we need to deliver a framework that is capable of orchestrating the microfrontends.

The orchestration of the different modules boils down to the management of a lifecycle: mounting, running, unmounting. The different modules can be taken from independently running servers; however, their locations must already be known in the application shell.

Implementing such a framework requires at least some configuration, e.g., a map of the scripts to include:

const scripts = [
  'https://example.com/script1.js',
  'https://example.com/script2.js',
];
const registrations = {};

function activityCheck(name) {
  const current = location.hash;
  const registration = registrations[name];

  if (registration) {
    if (registration.activity(current) !== registration.active) {
      if (registration.active) {
        registration.lifecycle.unmount();
      } else {
        registration.lifecycle.mount();
      }

      registration.active = !registration.active;
    }
  }
}

window.addEventListener('hashchange', function () {
  Object.keys(registrations).forEach(activityCheck);
});

window.registerApp = function(name, activity, lifecycle) {
  registrations[name] = {
    activity,
    lifecycle,
    active: false,
  };
  activityCheck(name);
}

scripts.forEach(src => {
  const script = document.createElement('script');
  script.src = src;
  document.body.appendChild(script);
});

The lifecycle management may be more complicated than in the script above. Thus a module for such a composition needs to come with some structure applied — at least an exported mount and unmount function, as sketched below.
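
Against the toy registerApp API from the snippet above, one of the referenced scripts could register itself like this (illustrative only):

// script1.js: a module registering itself with the app shell
window.registerApp(
  "shop",
  // activity: the module is active whenever the hash starts with #/shop
  hash => hash.startsWith("#/shop"),
  {
    mount() {
      const root = document.createElement("div");
      root.id = "shop";
      root.textContent = "Shop microfrontend mounted";
      document.body.appendChild(root);
    },
    unmount() {
      document.querySelector("#shop").remove();
    },
  }
);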

In this space we find the following solutions:

What are the pros and cons of this approach?

  • Pro: Enforces separation of concerns
  • Pro: Gives developers much freedom
  • Pro: Looks embedded to the user
  • Con: Enforces duplication and overhead
  • Con: Inconsistent user experience
  • Con: Requires JavaScript

6. Siteless UIs

This topic deserves its own article, but since we are listing all the patterns I don’t want to omit it here. Taking the approach of SPA composition, all we miss is a decoupling (or independent centralization) of the script sources from the services, as well as a shared runtime.

Both things are done for a reason:

  • The decoupling makes sure that UI and service responsibilities are not mixed up; this also enables serverless computing
  • The shared runtime is the cure for the resource-intensive composition given by the previous pattern

Both things combined bring the same kind of benefits to the frontend that “serverless functions” brought to the backend. They also come with similar challenges:

  • The runtime cannot just be updated — it must remain consistent with the modules
  • Debugging or running the modules locally requires an emulator of the runtime
  • Not all technologies may be supported equally

The diagram for siteless UIs looks as follows.
[Image: Patterns for Microfrontends]

The main advantage of this design is that sharing of useful or common resources is supported. Sharing a pattern library makes a lot of sense.

All in all the architecture diagram looks quite similar to the SPA composition mentioned earlier. However, the feed service and the coupling to a runtime bring additional benefits (and challenges to be solved by any framework in that space).

The big advantage is that once these challenges are cracked the development experience is supposed to be excellent. The user experience can be fully customized, treating the modules as flexible opt-in pieces of functionality. A clear separation between feature (the respective implementation) and permission (the right to access the feature) is thus possible.

One of the easiest implementations of this pattern is the following:

// app-shell/main.js
window.app = {
  registerPage(url, cb) {}
  // ...
};

showLoading();

fetch("https://feed.piral.io/api/v1/pilet/sample")
  .then(res => res.json())
  .then(body =>
    Promise.all(
      body.items.map(
        item =>
          new Promise(resolve => {
            const script = document.createElement("script");
            script.src = item.link;
            script.onload = resolve;
            document.body.appendChild(script);
          })
      )
    )
  )
  .catch(err => console.error(err))
  .then(() => hideLoading());

// module/index.jsx
import * as React from "react";
import { render } from "react-dom";
import { Page } from "./Page";

if (window.app !== undefined) {
  window.app.registerPage("/sample", element => {
    render(<Page />, element);
  });
}

This uses a global variable to share the API from the app shell. However, we already see several challenges using this approach:

  • What if one module crashes?
  • How to share dependencies (to avoid bundling them with each module, as shown in the simple implementation)?
  • How to get proper typing?
  • How is this being debugged?
  • How is proper routing done?

Implementing all of these features is a topic of its own. Regarding debugging, we should follow the same approach as all the serverless frameworks (e.g., AWS Lambda, Azure Functions) do: just ship an emulator that behaves like the real thing, except that it runs locally and works offline.

In this space we find the following solutions:

What are the pros and cons of this approach?

  • Pro: Enforces separation of concerns
  • Pro: Supports sharing of resources to avoid overhead
  • Pro: Consistent and embedded user experience
  • Con: Strict dependency management for shared resources necessary
  • Con: Requires another piece of (potentially managed) infrastructure
  • Con: Requires JavaScript

A Framework for Microfrontends

Finally, we should have a look at how one of the provided frameworks can be used to implement microfrontends. We go for Piral as this is the one I’m most familiar with.

In the following we approach the problem from two sides. First, we start with a module (i.e., microfrontend) in this context. Then we’ll walk over creating an app shell.

For the module we use my Mario5 toy project. This is a project that started several years ago with a JavaScript implementation of Super Mario called “Mario5”. It was followed up by a TypeScript tutorial / rewrite named “Mario5TS”, which has been kept up-to-date since then.

For the app shell we utilize the sample Piral instance. This one shows all concepts in one sweep. It’s also always kept up-to-date.

Let’s start with a module, which in the Piral framework is called a pilet. At its core, a pilet has a JavaScript root module which is usually located in src/index.tsx.

A Pilet

Starting with an empty pilet gives us the following root module:

import { PiletApi } from "sample-piral";

export function setup(app: PiletApi) {}

We need to export a specially named function called setup. This one will be used later to integrate the specific parts of our application.

Using React we could for instance register a menu item or a tile to be always displayed:

import "./Styles/tile.scss";
import * as React from "react";
import { Link } from "react-router-dom";
import { PiletApi } from "sample-piral";

export function setup(app: PiletApi) {
  app.registerMenu(() => <Link to="/mario5">Mario 5</Link>);

  app.registerTile(
    () => (
      <Link to="/mario5" className="mario-tile">
        Mario5
      </Link>
    ),
    {
      initialColumns: 2,
      initialRows: 2
    }
  );
}

Since our tile requires some styling, we also add a stylesheet to our pilet. Great, so far so good. All the directly included resources will always be available in the app shell.

Now it’s time to also integrate the game itself. We decide to put it on a dedicated page, even though a modal dialog may also be cool. All the code sits in mario.ts and works against the standard DOM — no React yet.

As React also supports the manipulation of hosted DOM nodes, we use a reference hook to attach the game.

import "./Styles/tile.scss";
import * as React from "react";
import { Link } from "react-router-dom";
import { PiletApi } from "sample-piral";
import { appendMarioTo } from "./mario";

export function setup(app: PiletApi) {
  app.registerMenu(() => <Link to="/mario5">Mario 5</Link>);

  app.registerTile(
    () => (
      <Link to="/mario5" className="mario-tile">
        Mario5
      </Link>
    ),
    {
      initialColumns: 2,
      initialRows: 2
    }
  );

  app.registerPage("/mario5", () => {
    const host = React.useRef();
    React.useEffect(() => {
      const gamePromise = appendMarioTo(host.current, {
        sound: true
      });
      gamePromise.then(game => game.start());
      return () => gamePromise.then(game => game.pause());
    });
    return <div ref={host} />;
  });
}

Theoretically, we could also add further functionality such as resuming the game or lazy loading the side-bundle containing the game. Right now only sounds are lazy loaded through calls to the import() function.

Starting the pilet will be done via

https://gist.github.com/0678ac99f4653843159cf8e54ad06422

which uses the Piral CLI under the hood. The Piral CLI is always installed locally, but it could also be installed globally to get commands such as pilet debug directly available in the command line.
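
In a freshly scaffolded pilet this typically boils down to the start script (an assumption here, since the embedded snippet above is not reproduced):

npm start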

Building the pilet can also be done with the local installation:

npm run build-pilet

The App Shell

Now it’s time to create an app shell. Usually, we would already have an app shell (e.g., the previous pilet was already created for the sample app shell), but in my opinion it’s more important to see how the development of a module goes.

Creating an app shell with Piral is as simple as installing piral. To make it even simpler, the Piral CLI also supports scaffolding a new app shell.

Either way we will most likely end up with something like this:

import * as React from "react";
import { render } from "react-dom";
import { createInstance, Piral, SetComponent, SetRoute, Dashboard } from "piral";
import { Layout, Loader } from "./layout";

const instance = createInstance({
  requestPilets() {
    return fetch("https://feed.piral.io/api/v1/pilet/sample")
      .then(res => res.json())
      .then(res => res.items);
  }
});

const app = (
  <Piral instance={instance}>
    <SetComponent name="LoadingIndicator" component={Loader} />
    <SetComponent name="Layout" component={Layout} />
    <SetRoute path="/" component={Dashboard} />
  </Piral>
);

render(app, document.querySelector("#app"));

Here we do three things:

  1. We set up all the imports and libraries
  2. We create the Piral instance; supplying all functional options (most importantly declaring where the pilets come from)
  3. We render the app shell using components and a custom layout that we defined

The actual rendering is done by React.

Building the app shell is straightforward — in the end it’s a standard bundler (Parcel) processing the whole application. The output is a folder containing all the files to be placed on a web server or static storage.

Points of Interest

Coining the term “Siteless UI” potentially requires a bit of explanation. Let me start with the name: as seen, it is a direct reference to “serverless computing”. While serverless may also be a good term for the technology used here, it may also be misleading and wrong. UIs can generally be deployed on serverless infrastructure already (e.g., Amazon S3, Azure Blob Storage, Dropbox). This is one of the benefits of “rendering the UI on the client” instead of doing server-side rendering.

However, I wanted to capture the idea of “a UI that cannot live without a host”. It is the same as with serverless functions, which require a runtime sitting somewhere and could not start otherwise.

Let’s compare the similarities. First, let’s start off with the conjecture that microfrontends should be to frontend UIs what microservices have been to backend services. In this case microfrontends:

  • can start standalone
  • provide independent URLs
  • have independent lifetime (startup, recycle, shutdown)
  • do independent deployments
  • define an independent user / state management
  • run as a dedicated webserver somewhere (e.g., as a docker image)
  • if combined, we use a gateway-like structure

Great, certainly some of these things apply here and there. However, note that some of this contradicts SPA composition and, as a result, siteless UIs, too.

Now let’s compare this to the analogy we get if the conjecture changes to: siteless UIs should be to frontend UIs what serverless functions have been to backend services. In this case siteless UIs:

  • require a runtime to start
  • provide URLs within the environment given by the runtime
  • are coupled to the lifetime defined by the runtime
  • do independent deployments
  • define partially independent, partially shared (but controlled / isolated) user / state management
  • run on a non-operated infrastructure somewhere else (e.g., on the client)

If you agree that this reads like a perfect analogy — awesome! If not, please provide your points in the comments below. I’ll still try to see where this is going and whether this seems like a great idea to follow. Much appreciated!

#Microfrontend #cloud

What is GEEK

Buddha Community

An overview of 6 Patterns for Microfrontends
渚  直樹

渚 直樹

1635917640

ループを使用して、Rustのデータを反復処理します

このモジュールでは、Rustでハッシュマップ複合データ型を操作する方法について説明します。ハッシュマップのようなコレクション内のデータを反復処理するループ式を実装する方法を学びます。演習として、要求された注文をループし、条件をテストし、さまざまなタイプのデータを処理することによって車を作成するRustプログラムを作成します。

さび遊び場

錆遊び場は錆コンパイラにブラウザインタフェースです。言語をローカルにインストールする前、またはコンパイラが利用できない場合は、Playgroundを使用してRustコードの記述を試すことができます。このコース全体を通して、サンプルコードと演習へのPlaygroundリンクを提供します。現時点でRustツールチェーンを使用できない場合でも、コードを操作できます。

Rust Playgroundで実行されるすべてのコードは、ローカルの開発環境でコンパイルして実行することもできます。コンピューターからRustコンパイラーと対話することを躊躇しないでください。Rust Playgroundの詳細については、What isRust?をご覧ください。モジュール。

学習目標

このモジュールでは、次のことを行います。

  • Rustのハッシュマップデータ型、およびキーと値にアクセスする方法を確認してください
  • ループ式を使用してRustプログラムのデータを反復処理する方法を探る
  • Rustプログラムを作成、コンパイル、実行して、ループを使用してハッシュマップデータを反復処理します

Rustのもう1つの一般的なコレクションの種類は、ハッシュマップです。このHashMap<K, V>型は、各キーKをその値にマッピングすることによってデータを格納しますV。ベクトル内のデータは整数インデックスを使用してアクセスされますが、ハッシュマップ内のデータはキーを使用してアクセスされます。

ハッシュマップタイプは、オブジェクト、ハッシュテーブル、辞書などのデータ項目の多くのプログラミング言語で使用されます。

ベクトルのように、ハッシュマップは拡張可能です。データはヒープに格納され、ハッシュマップアイテムへのアクセスは実行時にチェックされます。

ハッシュマップを定義する

次の例では、書評を追跡するためのハッシュマップを定義しています。ハッシュマップキーは本の名前であり、値は読者のレビューです。

use std::collections::HashMap;
let mut reviews: HashMap<String, String> = HashMap::new();

reviews.insert(String::from("Ancient Roman History"), String::from("Very accurate."));
reviews.insert(String::from("Cooking with Rhubarb"), String::from("Sweet recipes."));
reviews.insert(String::from("Programming in Rust"), String::from("Great examples."));

このコードをさらに詳しく調べてみましょう。最初の行に、新しいタイプの構文が表示されます。

use std::collections::HashMap;

このuseコマンドは、Rust標準ライブラリの一部HashMapからの定義をcollectionsプログラムのスコープに取り込みます。この構文は、他のプログラミング言語がインポートと呼ぶものと似ています。

HashMap::newメソッドを使用して空のハッシュマップを作成します。reviews必要に応じてキーと値を追加または削除できるように、変数を可変として宣言します。この例では、ハッシュマップのキーと値の両方がStringタイプを使用しています。

let mut reviews: HashMap<String, String> = HashMap::new();

キーと値のペアを追加します

このinsert(<key>, <value>)メソッドを使用して、ハッシュマップに要素を追加します。コードでは、構文は<hash_map_name>.insert()次のとおりです。

reviews.insert(String::from("Ancient Roman History"), String::from("Very accurate."));

キー値を取得する

ハッシュマップにデータを追加した後、get(<key>)メソッドを使用してキーの特定の値を取得できます。

// Look for a specific review
let book: &str = "Programming in Rust";
println!("\nReview for \'{}\': {:?}", book, reviews.get(book));

出力は次のとおりです。

Review for 'Programming in Rust': Some("Great examples.")

ノート

出力には、書評が単なる「すばらしい例」ではなく「Some( "すばらしい例。")」として表示されていることに注意してください。getメソッドはOption<&Value>型を返すため、Rustはメソッド呼び出しの結果を「Some()」表記でラップします。

キーと値のペアを削除します

この.remove()メソッドを使用して、ハッシュマップからエントリを削除できます。get無効なハッシュマップキーに対してメソッドを使用すると、getメソッドは「なし」を返します。

// Remove book review
let obsolete: &str = "Ancient Roman History";
println!("\n'{}\' removed.", obsolete);
reviews.remove(obsolete);

// Confirm book review removed
println!("\nReview for \'{}\': {:?}", obsolete, reviews.get(obsolete));

出力は次のとおりです。

'Ancient Roman History' removed.
Review for 'Ancient Roman History': None

このコードを試して、このRustPlaygroundでハッシュマップを操作できます。

演習:ハッシュマップを使用して注文を追跡する
この演習では、ハッシュマップを使用するように自動車工場のプログラムを変更します。

ハッシュマップキーと値のペアを使用して、車の注文に関する詳細を追跡し、出力を表示します。繰り返しになりますが、あなたの課題は、サンプルコードを完成させてコンパイルして実行することです。

この演習のサンプルコードで作業するには、次の2つのオプションがあります。

  • コードをコピーして、ローカル開発環境で編集します。
  • 準備されたRustPlaygroundでコードを開きます。

ノート

サンプルコードで、todo!マクロを探します。このマクロは、完了するか更新する必要があるコードを示します。

現在のプログラムをロードする

最初のステップは、既存のプログラムコードを取得することです。

  1. 編集のために既存のプログラムコードを開きます。コードは、データ型宣言、および定義のため含みcar_qualitycar_factoryおよびmain機能を。

次のコードをコピーしてローカル開発環境で編集する
か、この準備されたRustPlaygroundでコードを開きます。

#[derive(PartialEq, Debug)]
struct Car { color: String, motor: Transmission, roof: bool, age: (Age, u32) }

#[derive(PartialEq, Debug)]
enum Transmission { Manual, SemiAuto, Automatic }

#[derive(PartialEq, Debug)]
enum Age { New, Used }

// Get the car quality by testing the value of the input argument
// - miles (u32)
// Return tuple with car age ("New" or "Used") and mileage
fn car_quality (miles: u32) -> (Age, u32) {

    // Check if car has accumulated miles
    // Return tuple early for Used car
    if miles > 0 {
        return (Age::Used, miles);
    }

    // Return tuple for New car, no need for "return" keyword or semicolon
    (Age::New, miles)
}

// Build "Car" using input arguments
fn car_factory(order: i32, miles: u32) -> Car {
    let colors = ["Blue", "Green", "Red", "Silver"];

    // Prevent panic: Check color index for colors array, reset as needed
    // Valid color = 1, 2, 3, or 4
    // If color > 4, reduce color to valid index
    let mut color = order as usize;
    if color > 4 {        
        // color = 5 --> index 1, 6 --> 2, 7 --> 3, 8 --> 4
        color = color - 4;
    }

    // Add variety to orders for motor type and roof type
    let mut motor = Transmission::Manual;
    let mut roof = true;
    if order % 3 == 0 {          // 3, 6, 9
        motor = Transmission::Automatic;
    } else if order % 2 == 0 {   // 2, 4, 8, 10
        motor = Transmission::SemiAuto;
        roof = false;
    }                            // 1, 5, 7, 11

    // Return requested "Car"
    Car {
        color: String::from(colors[(color-1) as usize]),
        motor: motor,
        roof: roof,
        age: car_quality(miles)
    }
}

fn main() {
    // Initialize counter variable
    let mut order = 1;
    // Declare a car as mutable "Car" struct
    let mut car: Car;

    // Order 6 cars, increment "order" for each request
    // Car order #1: Used, Hard top
    car = car_factory(order, 1000);
    println!("{}: {:?}, Hard top = {}, {:?}, {}, {} miles", order, car.age.0, car.roof, car.motor, car.color, car.age.1);

    // Car order #2: Used, Convertible
    order = order + 1;
    car = car_factory(order, 2000);
    println!("{}: {:?}, Hard top = {}, {:?}, {}, {} miles", order, car.age.0, car.roof, car.motor, car.color, car.age.1);    

    // Car order #3: New, Hard top
    order = order + 1;
    car = car_factory(order, 0);
    println!("{}: {:?}, Hard top = {}, {:?}, {}, {} miles", order, car.age.0, car.roof, car.motor, car.color, car.age.1);

    // Car order #4: New, Convertible
    order = order + 1;
    car = car_factory(order, 0);
    println!("{}: {:?}, Hard top = {}, {:?}, {}, {} miles", order, car.age.0, car.roof, car.motor, car.color, car.age.1);

    // Car order #5: Used, Hard top
    order = order + 1;
    car = car_factory(order, 3000);
    println!("{}: {:?}, Hard top = {}, {:?}, {}, {} miles", order, car.age.0, car.roof, car.motor, car.color, car.age.1);

    // Car order #6: Used, Hard top
    order = order + 1;
    car = car_factory(order, 4000);
    println!("{}: {:?}, Hard top = {}, {:?}, {}, {} miles", order, car.age.0, car.roof, car.motor, car.color, car.age.1);
}

2. プログラムをビルドします。次のセクションに進む前に、コードがコンパイルされて実行されることを確認してください。

次の出力が表示されます。

1: Used, Hard top = true, Manual, Blue, 1000 miles
2: Used, Hard top = false, SemiAuto, Green, 2000 miles
3: New, Hard top = true, Automatic, Red, 0 miles
4: New, Hard top = false, SemiAuto, Silver, 0 miles
5: Used, Hard top = true, Manual, Blue, 3000 miles
6: Used, Hard top = true, Automatic, Green, 4000 miles

注文の詳細を追跡するためのハッシュマップを追加する

現在のプログラムは、各車の注文を処理し、各注文が完了した後に要約を印刷します。car_factory関数を呼び出すたびにCar、注文の詳細を含む構造体が返され、注文が実行されます。結果はcar変数に格納されます。

お気づきかもしれませんが、このプログラムにはいくつかの重要な機能がありません。すべての注文を追跡しているわけではありません。car変数は、現在の注文の詳細のみを保持しています。関数carの結果で変数が更新されるたびcar_factoryに、前の順序の詳細が上書きされます。

ファイリングシステムのようにすべての注文を追跡するために、プログラムを更新する必要があります。この目的のために、<K、V>ペアでハッシュマップを定義します。ハッシュマップキーは、車の注文番号に対応します。ハッシュマップ値は、Car構造体で定義されているそれぞれの注文の詳細になります。

  1. ハッシュマップを定義するには、main関数の先頭、最初の中括弧の直後に次のコードを追加します{
// Initialize a hash map for the car orders
    // - Key: Car order number, i32
    // - Value: Car order details, Car struct
    use std::collections::HashMap;
    let mut orders: HashMap<i32, Car> = HashMap;

2. ordersハッシュマップを作成するステートメントの構文の問題を修正します。

ヒント

ハッシュマップを最初から作成しているので、おそらくこのnew()メソッドを使用することをお勧めします。

3. プログラムをビルドします。次のセクションに進む前に、コードがコンパイルされていることを確認してください。コンパイラからの警告メッセージは無視してかまいません。

ハッシュマップに値を追加する

次のステップは、履行された各自動車注文をハッシュマップに追加することです。

このmain関数では、car_factory車の注文ごとに関数を呼び出します。注文が履行された後、println!マクロを呼び出して、car変数に格納されている注文の詳細を表示します。

// Car order #1: Used, Hard top
    car = car_factory(order, 1000);
    println!("{}: {}, Hard top = {}, {:?}, {}, {} miles", order, car.age.0, car.roof, car.motor, car.color, car.age.1);

    ...

    // Car order #6: Used, Hard top
    order = order + 1;
    car = car_factory(order, 4000);
    println!("{}: {}, Hard top = {}, {:?}, {}, {} miles", order, car.age.0, car.roof, car.motor, car.color, car.age.1);

新しいハッシュマップで機能するように、これらのコードステートメントを修正します。

  • car_factory関数の呼び出しは保持します。返された各Car構造体は、ハッシュマップの<K、V>ペアの一部として格納されます。
  • println!マクロの呼び出しを更新して、ハッシュマップに保存されている注文の詳細を表示します。
  1. main関数で、関数の呼び出しcar_factoryとそれに伴うprintln!マクロの呼び出しを見つけます。
// Car order #1: Used, Hard top
    car = car_factory(order, 1000);
    println!("{}: {}, Hard top = {}, {:?}, {}, {} miles", order, car.age.0, car.roof, car.motor, car.color, car.age.1);

    ...

    // Car order #6: Used, Hard top
    order = order + 1;
    car = car_factory(order, 4000);
    println!("{}: {}, Hard top = {}, {:?}, {}, {} miles", order, car.age.0, car.roof, car.motor, car.color, car.age.1);

2. すべての自動車注文のステートメントの完全なセットを次の改訂されたコードに置き換えます。

// Car order #1: Used, Hard top
    car = car_factory(order, 1000);
    orders(order, car);
    println!("Car order {}: {:?}", order, orders.get(&order));

    // Car order #2: Used, Convertible
    order = order + 1;
    car = car_factory(order, 2000);
    orders(order, car);
    println!("Car order {}: {:?}", order, orders.get(&order));

    // Car order #3: New, Hard top
    order = order + 1;
    car = car_factory(order, 0);
    orders(order, car);
    println!("Car order {}: {:?}", order, orders.get(&order));

    // Car order #4: New, Convertible
    order = order + 1;
    car = car_factory(order, 0);
    orders(order, car);
    println!("Car order {}: {:?}", order, orders.get(&order));

    // Car order #5: Used, Hard top
    order = order + 1;
    car = car_factory(order, 3000);
    orders(order, car);
    println!("Car order {}: {:?}", order, orders.get(&order));

    // Car order #6: Used, Hard top
    order = order + 1;
    car = car_factory(order, 4000);
    orders(order, car);
    println!("Car order {}: {:?}", order, orders.get(&order));

3. 今すぐプログラムをビルドしようとすると、コンパイルエラーが表示されます。<K、V>ペアをordersハッシュマップに追加するステートメントに構文上の問題があります。問題がありますか?先に進んで、ハッシュマップに順序を追加する各ステートメントの問題を修正してください。

ヒント

ordersハッシュマップに直接値を割り当てることはできません。挿入を行うにはメソッドを使用する必要があります。

プログラムを実行する

プログラムが正常にビルドされると、次の出力が表示されます。

Car order 1: Some(Car { color: "Blue", motor: Manual, roof: true, age: ("Used", 1000) })
Car order 2: Some(Car { color: "Green", motor: SemiAuto, roof: false, age: ("Used", 2000) })
Car order 3: Some(Car { color: "Red", motor: Automatic, roof: true, age: ("New", 0) })
Car order 4: Some(Car { color: "Silver", motor: SemiAuto, roof: false, age: ("New", 0) })
Car order 5: Some(Car { color: "Blue", motor: Manual, roof: true, age: ("Used", 3000) })
Car order 6: Some(Car { color: "Green", motor: Automatic, roof: true, age: ("Used", 4000) })

改訂されたコードの出力が異なることに注意してください。println!マクロディスプレイの内容Car各値を示すことによって、構造体と対応するフィールド名。

次の演習では、ループ式を使用してコードの冗長性を減らします。

for、while、およびloop式を使用します


多くの場合、プログラムには、その場で繰り返す必要のあるコードのブロックがあります。ループ式を使用して、繰り返しの実行方法をプログラムに指示できます。電話帳のすべてのエントリを印刷するには、ループ式を使用して、最初のエントリから最後のエントリまで印刷する方法をプログラムに指示できます。

Rustは、プログラムにコードのブロックを繰り返させるための3つのループ式を提供します。

  • loop:手動停止が発生しない限り、繰り返します。
  • while:条件が真のままで繰り返します。
  • for:コレクション内のすべての値に対して繰り返します。

この単元では、これらの各ループ式を見ていきます。

ループし続けるだけ

loop式は、無限ループを作成します。このキーワードを使用すると、式の本文でアクションを継続的に繰り返すことができます。ループを停止させるための直接アクションを実行するまで、アクションが繰り返されます。

次の例では、「We loopforever!」というテキストを出力します。そしてそれはそれ自体で止まりません。println!アクションは繰り返し続けます。

loop {
    println!("We loop forever!");
}

loop式を使用する場合、ループを停止する唯一の方法は、プログラマーとして直接介入する場合です。特定のコードを追加してループを停止したり、Ctrl + Cなどのキーボード命令を入力してプログラムの実行を停止したりできます。

loop式を停止する最も一般的な方法は、breakキーワードを使用してブレークポイントを設定することです。

loop {
    // Keep printing, printing, printing...
    println!("We loop forever!");
    // On the other hand, maybe we should stop!
    break;                            
}

プログラムがbreakキーワードを検出すると、loop式の本体でアクションの実行を停止し、次のコードステートメントに進みます。

breakキーワードは、特別な機能を明らかにするloop表現を。breakキーワードを使用すると、式本体でのアクションの繰り返しを停止することも、ブレークポイントで値を返すこともできます。

次の例はbreakloop式でキーワードを使用して値も返す方法を示しています。

let mut counter = 1;
// stop_loop is set when loop stops
let stop_loop = loop {
    counter *= 2;
    if counter > 100 {
        // Stop loop, return counter value
        break counter;
    }
};
// Loop should break when counter = 128
println!("Break the loop at counter = {}.", stop_loop);

出力は次のとおりです。

Break the loop at counter = 128.

私たちのloop表現の本体は、これらの連続したアクションを実行します。

  1. stop_loop変数を宣言します。
  2. 変数値をloop式の結果にバインドするようにプログラムに指示します。
  3. ループを開始します。loop式の本体でアクションを実行します:
    ループ本体
    1. counter値を現在の値の2倍にインクリメントします。
    2. counter値を確認してください。
    3. もしcounter値が100以上です。

ループから抜け出し、counter値を返します。

4. もしcounter値が100以上ではありません。

ループ本体でアクションを繰り返します。

5. stop_loop値を式のcounter結果である値に設定しますloop

loop式本体は、複数のブレークポイントを持つことができます。式に複数のブレークポイントがある場合、すべてのブレークポイントは同じタイプの値を返す必要があります。すべての値は、整数型、文字列型、ブール型などである必要があります。ブレークポイントが明示的に値を返さない場合、プログラムは式の結果を空のタプルとして解釈します()

しばらくループする

whileループは、条件式を使用しています。条件式が真である限り、ループが繰り返されます。このキーワードを使用すると、条件式がfalseになるまで、式本体のアクションを実行できます。

whileループは、ブール条件式を評価することから始まります。条件式がと評価されるtrueと、本体のアクションが実行されます。アクションが完了すると、制御は条件式に戻ります。条件式がと評価されるfalseと、while式は停止します。

次の例では、「しばらくループします...」というテキストを出力します。ループを繰り返すたびに、「カウントが5未満である」という条件がテストされます。条件が真のままである間、式本体のアクションが実行されます。条件が真でなくなった後、whileループは停止し、プログラムは次のコードステートメントに進みます。

while counter < 5 {
    println!("We loop a while...");
    counter = counter + 1;
}

これらの値のループ

forループは、項目のコレクションを処理するためにイテレータを使用しています。ループは、コレクション内の各アイテムの式本体のアクションを繰り返します。このタイプのループの繰り返しは、反復と呼ばれます。すべての反復が完了すると、ループは停止します。

Rustでは、配列、ベクトル、ハッシュマップなど、任意のコレクションタイプを反復処理できます。Rustはイテレータを使用して、コレクション内の各アイテムを最初から最後まで移動します

forループはイテレータとして一時変数を使用しています。変数はループ式の開始時に暗黙的に宣言され、現在の値は反復ごとに設定されます。

次のコードでは、コレクションはbig_birds配列であり、イテレーターの名前はbirdです。

let big_birds = ["ostrich", "peacock", "stork"];
for bird in big_birds

iter()メソッドを使用して、コレクション内のアイテムにアクセスします。for式は結果にイテレータの現在の値をバインドするiter()方法。式本体では、イテレータ値を操作できます。

let big_birds = ["ostrich", "peacock", "stork"];
for bird in big_birds.iter() {
    println!("The {} is a big bird.", bird);
}

出力は次のとおりです。

The ostrich is a big bird.
The peacock is a big bird.
The stork is a big bird.

イテレータを作成するもう1つの簡単な方法は、範囲表記を使用することですa..b。イテレータはa値から始まりb、1ステップずつ続きますが、値を使用しませんb

for number in 0..5 {
    println!("{}", number * 2);
}

このコードは、0、1、2、3、および4の数値をnumber繰り返し処理します。ループの繰り返しごとに、値を変数にバインドします。

出力は次のとおりです。

0
2
4
6
8

このコードを実行して、このRustPlaygroundでループを探索できます。

演習:ループを使用してデータを反復処理する


この演習では、自動車工場のプログラムを変更して、ループを使用して自動車の注文を反復処理します。

main関数を更新して、注文の完全なセットを処理するためのループ式を追加します。ループ構造は、コードの冗長性を減らすのに役立ちます。コードを簡素化することで、注文量を簡単に増やすことができます。

このcar_factory関数では、範囲外の値での実行時のパニックを回避するために、別のループを追加します。

課題は、サンプルコードを完成させて、コンパイルして実行することです。

この演習のサンプルコードで作業するには、次の2つのオプションがあります。

  • コードをコピーして、ローカル開発環境で編集します。
  • 準備されたRustPlaygroundでコードを開きます。

ノート

サンプルコードで、todo!マクロを探します。このマクロは、完了するか更新する必要があるコードを示します。

プログラムをロードする

前回の演習でプログラムコードを閉じた場合は、この準備されたRustPlaygroundでコードを再度開くことができます。

必ずプログラムを再構築し、コンパイラエラーなしで実行されることを確認してください。

ループ式でアクションを繰り返す

より多くの注文をサポートするには、プログラムを更新する必要があります。現在のコード構造では、冗長ステートメントを使用して6つの注文をサポートしています。冗長性は扱いにくく、維持するのが困難です。

ループ式を使用してアクションを繰り返し、各注文を作成することで、構造を単純化できます。簡略化されたコードを使用すると、多数の注文をすばやく作成できます。

  1. ではmain機能、削除次の文を。このコードブロックは、order変数を定義および設定し、自動車の注文のcar_factory関数とprintln!マクロを呼び出し、各注文をordersハッシュマップに挿入します。
// Order 6 cars
    // - Increment "order" after each request
    // - Add each order <K, V> pair to "orders" hash map
    // - Call println! to show order details from the hash map

    // Initialize order variable
    let mut order = 1;

    // Car order #1: Used, Hard top
    car = car_factory(order, 1000);
    orders.insert(order, car);
    println!("Car order {}: {:?}", order, orders.get(&order));

    ...

    // Car order #6: Used, Hard top
    order = order + 1;
    car = car_factory(order, 4000);
    orders.insert(order, car);
    println!("Car order {}: {:?}", order, orders.get(&order));

2. 削除されたステートメントを次のコードブロックに置き換えます。

// Start with zero miles
    let mut miles = 0;

    todo!("Add a loop expression to fulfill orders for 6 cars, initialize `order` variable to 1") {

        // Call car_factory to fulfill order
        // Add order <K, V> pair to "orders" hash map
        // Call println! to show order details from the hash map        
        car = car_factory(order, miles);
        orders.insert(order, car);
        println!("Car order {}: {:?}", order, orders.get(&order));

        // Reset miles for order variety
        if miles == 2100 {
            miles = 0;
        } else {
            miles = miles + 700;
        }
    }

3. アクションを繰り返すループ式を追加して、6台の車の注文を作成します。order1に初期化された変数が必要です。

4. プログラムをビルドします。コードがエラーなしでコンパイルされることを確認してください。

次の例のような出力が表示されます。

Car order 1: Some(Car { color: "Blue", motor: Manual, roof: true, age: ("New", 0) })
Car order 2: Some(Car { color: "Green", motor: SemiAuto, roof: false, age: ("Used", 700) })
Car order 3: Some(Car { color: "Red", motor: Automatic, roof: true, age: ("Used", 1400) })
Car order 4: Some(Car { color: "Silver", motor: SemiAuto, roof: false, age: ("Used", 2100) })
Car order 5: Some(Car { color: "Blue", motor: Manual, roof: true, age: ("New", 0) })
Car order 6: Some(Car { color: "Green", motor: Automatic, roof: true, age: ("Used", 700) })

車の注文を11に増やす

 プログラムは現在、ループを使用して6台の車の注文を処理しています。6台以上注文するとどうなりますか?

  1. main関数のループ式を更新して、11台の車を注文します。
    todo!("Update the loop expression to create 11 cars");

2. プログラムを再構築します。実行時に、プログラムはパニックになります!

Compiling playground v0.0.1 (/playground)
    Finished dev [unoptimized + debuginfo] target(s) in 1.26s
    Running `target/debug/playground`
thread 'main' panicked at 'index out of bounds: the len is 4 but the index is 4', src/main.rs:34:29

この問題を解決する方法を見てみましょう。

ループ式で実行時のパニックを防ぐ

このcar_factory関数では、if / else式を使用colorして、colors配列のインデックスの値を確認します。

// Prevent panic: Check color index for colors array, reset as needed
    // Valid color = 1, 2, 3, or 4
    // If color > 4, reduce color to valid index
    let mut color = order as usize;
    if color > 4 {        
        // color = 5 --> index 1, 6 --> 2, 7 --> 3, 8 --> 4
        color = color - 4;
    }

colors配列には4つの要素を持ち、かつ有効なcolor場合は、インデックスの範囲は0〜3の条件式をチェックしているcolor私たちはをチェックしません(インデックスが4よりも大きい場合color、その後の関数で4に等しいインデックスへのときに我々のインデックスを車の色を割り当てる配列では、インデックス値から1を減算しますcolor - 1color値4はcolors[3]、配列と同様に処理されます。)

現在のif / else式は、8台以下の車を注文するときの実行時のパニックを防ぐためにうまく機能します。しかし、11台の車を注文すると、プログラムは9番目の注文でパニックになります。より堅牢になるように式を調整する必要があります。この改善を行うために、別のループ式を使用します。

  1. ではcar_factory機能、ループ式であれば/他の条件文を交換してください。colorインデックス値が4より大きい場合に実行時のパニックを防ぐために、次の擬似コードステートメントを修正してください。
// Prevent panic: Check color index, reset as needed
    // If color = 1, 2, 3, or 4 - no change needed
    // If color > 4, reduce to color to a valid index
    let mut color = order as usize;
    todo!("Replace `if/else` condition with a loop to prevent run-time panic for color > 4");

ヒント

この場合、if / else条件からループ式への変更は実際には非常に簡単です。

2. プログラムをビルドします。コードがエラーなしでコンパイルされることを確認してください。

次の出力が表示されます。

Car order 1: Some(Car { color: "Blue", motor: Manual, roof: true, age: ("New", 0) })
Car order 2: Some(Car { color: "Green", motor: SemiAuto, roof: false, age: ("Used", 700) })
Car order 3: Some(Car { color: "Red", motor: Automatic, roof: true, age: ("Used", 1400) })
Car order 4: Some(Car { color: "Silver", motor: SemiAuto, roof: false, age: ("Used", 2100) })
Car order 5: Some(Car { color: "Blue", motor: Manual, roof: true, age: ("New", 0) })
Car order 6: Some(Car { color: "Green", motor: Automatic, roof: true, age: ("Used", 700) })
Car order 7: Some(Car { color: "Red", motor: Manual, roof: true, age: ("Used", 1400) })
Car order 8: Some(Car { color: "Silver", motor: SemiAuto, roof: false, age: ("Used", 2100) })
Car order 9: Some(Car { color: "Blue", motor: Automatic, roof: true, age: ("New", 0) })
Car order 10: Some(Car { color: "Green", motor: SemiAuto, roof: false, age: ("Used", 700) })
Car order 11: Some(Car { color: "Red", motor: Manual, roof: true, age: ("Used", 1400) })

概要

このモジュールでは、Rustで使用できるさまざまなループ式を調べ、ハッシュマップの操作方法を発見しました。データは、キーと値のペアとしてハッシュマップに保存されます。ハッシュマップは拡張可能です。

loop手動でプロセスを停止するまでの式は、アクションを繰り返します。while式をループして、条件が真である限りアクションを繰り返すことができます。このfor式は、データ収集を反復処理するために使用されます。

この演習では、自動車プログラムを拡張して、繰り返されるアクションをループし、すべての注文を処理しました。注文を追跡するためにハッシュマップを実装しました。

このラーニングパスの次のモジュールでは、Rustコードでエラーと障害がどのように処理されるかについて詳しく説明します。

 リンク: https://docs.microsoft.com/en-us/learn/modules/rust-loop-expressions/

#rust #Beginners 

Joseph  Murray

Joseph Murray

1623924649

Overview Of Prototype Desing Pattern

What is Prototype Design pattern?

prototypes design pattern allows you to create objects by cloning an existing object instead of creating a new object from scratch. This pattern is used when the process of object creation is costly. when cloning, the newly copied object contains the same characteristics as its source object. After cloning, we can change the values of the new object’s properties as required. This pattern comes under a **creational **pattern.

For easier understanding, Assume that a database operation should be done in order to create an object. This database call is both time-consuming and costly. we need to create multiple objects. As a result, when we create a new object, each time a database call will occur, this will give a performance flop, right? As a result, initially, we create one object and clone it each time when we required a new object. This will cut down the database calls for each object creation.

#java-design-pattern #cloning #prototype-design-pattern #java #prototype-pattern #overview of prototype desing pattern

ERIC  MACUS

ERIC MACUS

1647540000

Substrate Knowledge Map For Hackathon Participants

Substrate Knowledge Map for Hackathon Participants

The Substrate Knowledge Map provides information that you—as a Substrate hackathon participant—need to know to develop a non-trivial application for your hackathon submission.

The map covers 6 main sections:

  1. Introduction
  2. Basics
  3. Preliminaries
  4. Runtime Development
  5. Polkadot JS API
  6. Smart Contracts

Each section contains basic information on each topic, with links to additional documentation for you to dig deeper. Within each section, you'll find a mix of quizzes and labs to test your knowledge as your progress through the map. The goal of the labs and quizzes is to help you consolidate what you've learned and put it to practice with some hands-on activities.

Introduction

One question we often get is why learn the Substrate framework when we can write smart contracts to build decentralized applications?

The short answer is that using the Substrate framework and writing smart contracts are two different approaches.

Smart contract development

Traditional smart contract platforms allow users to publish additional logic on top of some core blockchain logic. Since smart contract logic can be published by anyone, including malicious actors and inexperienced developers, there are a number of intentional safeguards and restrictions built around these public smart contract platforms. For example:

Fees: Smart contract developers must ensure that contract users are charged for the computation and storage they impose on the computers running their contract. With fees, block creators are protected from abuse of the network.

Sandboxed: A contract is not able to modify core blockchain storage or storage items of other contracts directly. Its power is limited to only modifying its own state, and the ability to make outside calls to other contracts or runtime functions.

Reversion: Contracts can be prone to undesirable situations that lead to logical errors when wanting to revert or upgrade them. Developers need to learn additional patterns such as splitting their contract's logic and data to ensure seamless upgrades.

These safeguards and restrictions make running smart contracts slower and more costly. However, it's important to consider the different developer audiences for contract development versus Substrate runtime development.

Building decentralized applications with smart contracts allows your community to extend and develop on top of your runtime logic without worrying about proposals, runtime upgrades, and so on. You can also use smart contracts as a testing ground for future runtime changes, but done in an isolated way that protects your network from any errors the changes might introduce.

In summary, smart contract development:

  • Is inherently safer to the network.
  • Provides economic incentives and transaction fee mechanisms that can't be directly controlled by the smart contract author.
  • Provides computational overhead to support graceful logical failures.
  • Has a low barrier to entry for developers and enables a faster pace of community interaction.

Substrate runtime development

Unlike traditional smart contract development, Substrate runtime development offers none of the network protections or safeguards. Instead, as a runtime developer, you have total control over how the blockchain behaves. However, this level of control also means that there is a higher barrier to entry.

Substrate is a framework for building blockchains, which almost makes comparing it to smart contract development like comparing apples and oranges. With the Substrate framework, developers can build smart contracts but that is only a fraction of using Substrate to its full potential.

With Substrate, you have full control over the underlying logic that your network's nodes will run. You also have full access for modifying and controlling each and every storage item across your runtime modules. As you progress through this map, you'll discover concepts and techniques that will help you to unlock the potential of the Substrate framework, giving you the freedom to build the blockchain that best suits the needs of your application.

You'll also discover how you can upgrade the Substrate runtime with a single transaction instead of having to organize a community hard-fork. Upgradeability is one of the primary design features of the Substrate framework.

In summary, runtime development:

  • Provides low level access to your entire blockchain.
  • Removes the overhead of built-in safety for performance.
  • Has a higher barrier of entry for developers.
  • Provides flexibility to customize full-stack application logic.

To learn more about using smart contracts within Substrate, refer to the Smart Contract - Overview page as well as the Polkadot Builders Guide.

Navigating the documentation

If you need any community support, please join the following channels based on the area where you need help:

Alternatively, also look for support on Stackoverflow where questions are tagged with "substrate" or on the Parity Subport repo.

Use the following links to explore the sites and resources available on each:

Substrate Developer Hub has the most comprehensive all-round coverage about Substrate, from a "big picture" explanation of architecture to specific technical concepts. The site also provides tutorials to guide you as your learn the Substrate framework and the API reference documentation. You should check this site first if you want to look up information about Substrate runtime development. The site consists of:

Knowledge Base: Explaining the foundational concepts of building blockchain runtimes using Substrate.

Tutorials: Hand-on tutorials for developers to follow. The first SIX tutorials show the fundamentals in Substrate and are recommended for every Substrate learner to go through.

How-to Guides: These resources are like the O'Reilly cookbook series written in a task-oriented way for readers to get the job done. Some examples of the topics overed include:

  • Setting up proper weight functions for extrinsic calls.
  • Using off-chain workers to fetch HTTP requests.
  • Writing tests for your pallets It can also be read from

API docs: Substrate API reference documentation.

Substrate Node Template provides a light weight, minimal Substrate blockchain node that you can set up as a local development environment.

Substrate Front-end template provides a front-end interface built with React using Polkadot-JS API to connect to any Substrate node. Developers are encouraged to start new Substrate projects based on these templates.

If you face any technical difficulties and need support, feel free to join the Substrate Technical matrix channel and ask your questions there.

Additional resources

Polkadot Wiki documents the specific behavior and mechanisms of the Polkadot network. The Polkadot network allows multiple blockchains to connect and pass messages to each other. On the wiki, you can learn about how Polkadot—built using Substrate—is customized to support inter-blockchain message passing.

Polkadot JS API doc: documents how to use the Polkadot-JS API. This JavaScript-based API allows developers to build custom front-ends for their blockchains and applications. Polkadot JS API provides a way to connect to Substrate-based blockchains to query runtime metadata and send transactions.

Quiz #1

👉 Submit your answers to Quiz #1

Basics

Set up your local development environment

Here you will set up your local machine to install the Rust compiler—ensuring that you have both stable and nightly versions installed. Both stable and nightly versions are required because currently a Substrate runtime is compiled to a native binary using the stable Rust compiler, then compiled to a WebAssembly (WASM) binary, which only the nightly Rust compiler can do.

Also refer to:

Lab #1

👉 Complete Lab #1: Run a Substrate node

Interact with a Substrate network using Polkadot-JS apps

Polkadot JS Apps is the canonical front-end to interact with any Substrate-based chain.

You can configure whichever endpoint you want it to connected to, even to your localhost running node. Refer to the following two diagrams.

  1. Click on the top left side showing your currently connected network:

assets/01-polkadot-app-endpoint.png

  1. Scroll to the bottom of the menu, open DEVELOPMENT, and choose either Local Node or Custom to specify your own endpoint.

assets/02-polkadot-app-select-endpoint.png

Quiz #2

👉 Complete Quiz #2

Lab #2

👉 Complete Lab #2: Using Polkadot-JS Apps

Notes: If you are connecting Apps to a custom chain (or your locally-running node), you may need to specify your chain's custom data types in JSON under Settings > Developer.

Polkadot-JS Apps only receives a series of bytes from the blockchain. It is up to the developer to tell it how to decode and interpret these custom data type. To learn more on this, refer to:

You will also need to create an account. To do so, follow these steps on account generation. You'll learn that you can also use the Polkadot-JS Browser Plugin (a Metamask-like browser extension to manage your Substrate accounts) and it will automatically be imported into Polkadot-JS Apps.

Notes: When you run a Substrate chain in development mode (with the --dev flag), well-known accounts (Alice, Bob, Charlie, etc.) are always created for you.

Lab #3

👉 Complete Lab #3: Create an Account

Preliminaries

You need to know some Rust programming concepts and have a good understanding on how blockchain technology works in order to make the most of developing with Substrate. The following resources will help you brush up in these areas.

Rust

You will need to familiarize yourself with Rust to understand how Substrate is built and how to make the most of its capabilities.

If you are new to Rust, or need to brush up on your Rust knowledge, please refer to The Rust Book. You could still continue learning about Substrate without knowing Rust, but we recommend you come back to this section whenever you are in doubt about what a piece of Rust syntax means. Here are the parts of the Rust Book we recommend you familiarize yourself with:

  • ch 1 - 10: These chapters cover the foundational knowledge of programming in Rust
  • ch 13: On iterators and closures
  • ch 18 - 19: On advanced traits and advanced types. Learn a bit about macros as well. You will not necessarily be writing your own macros, but you'll be using a lot of Substrate and FRAME's built-in macros to write your blockchain runtime.

How blockchains work

Given that you'll be writing a blockchain runtime, you need to know what a blockchain is, and how it works. The Web3 Blockchain Fundamentals MOOC YouTube video series provides a good basis for understanding key blockchain concepts and how blockchains work.

The lectures we recommend you watch are: lectures 1 - 7 and lecture 10. That's 8 lectures, or about 4 hours of video.

Quiz #3

👉 Complete Quiz #3

Substrate runtime development

High level architecture

To know more about the high level architecture of Substrate, please go through the Knowledge Base articles on Getting Started: Overview and Getting Started: Architecture.

In this document, we assume you will develop a Substrate runtime with FRAME (v2). The following diagram shows what a Substrate node consists of.

assets/03-substrate-architecture.png

Each node has many components that manage things like the transaction queue, communicating over a P2P network, reaching consensus on the state of the blockchain, and the chain's actual runtime logic (aka the blockchain runtime). Each aspect of the node is interesting in its own right, and the runtime is particularly interesting because it contains the business logic (aka "state transition function") that codifies the chain's functionality. The runtime contains a collection of pallets that are configured to work together.

On the node level, Substrate leverages libp2p for the p2p networking layer and puts the transaction pool, consensus mechanism, and underlying data storage (a key-value database) on the node level. These components all work "under the hood", and in this knowledge map we won't cover them in detail except for mentioning their existence.

Quiz #4

👉 Complete Quiz #4

Runtime development topics

In our Developer Hub, we have thorough coverage of the various subjects you need to know to develop with Substrate, so here we just list the key topics and reference back to the Developer Hub. Please go through the following key concepts and the linked resources to learn the fundamentals of runtime development.

Key Concept: Runtime. This is where the blockchain's state transition function (the blockchain's application-specific logic) is defined. It is about composing multiple pallets (which can be understood as Rust modules) in the runtime and hooking them up together.

Runtime Development: Execution. This article describes how a block is produced, and how transactions are selected and executed to reach the next "stage" of the blockchain.

Runtime Development: Pallets. This article describes the basic structure of a Substrate pallet.

Runtime Development: FRAME. This article gives a high-level overview of the system pallets Substrate already implements to help you develop quickly as a runtime engineer. Have a quick skim so you have a basic idea of the different pallets Substrate is made of.

Lab #4

👉 Complete Lab #4: Adding a Pallet into a Runtime

Runtime Development: Storage. This article describes how data is stored on-chain and how you can access it.

Runtime Development: Events & Errors. This page describes how external parties learn what has happened on the blockchain, via the events and errors emitted when executing transactions.

Notes: All of the above concepts are defined in code using the #[pallet::*] macros. If you are interested in learning what other types of pallet macros exist, go to the FRAME macro API documentation and this doc on some frequently used Substrate macros.

Lab #5

👉 Complete Lab #5: Building a Proof-of-Existence dApp

Lab #6

👉 Complete Lab #6: Building a Substrate Kitties dApp

Quiz #5

👉 Complete Quiz #5

Polkadot JS API

Polkadot JS API is the JavaScript API for Substrate. By using it you can build a JavaScript front-end or utility and interact with any Substrate-based blockchain.

The Substrate Front-end Template is an example of using Polkadot JS API in a React front-end.

  • Runtime Development: Metadata. This article describes the API that allows external parties to query which APIs a chain exposes. Polkadot JS API makes use of a chain's metadata to know what queries and functions are available to call on a given chain.
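As an illustration (not part of the knowledge map itself), a minimal Node.js script using the @polkadot/api package to connect to a local development node and read some on-chain state might look like this; the endpoint and the Alice address are the usual development defaults:

// Sketch: connect to a local dev node and query chain info and a balance.
const { ApiPromise, WsProvider } = require("@polkadot/api");

async function main() {
  // Local node started e.g. with the --dev flag.
  const api = await ApiPromise.create({
    provider: new WsProvider("ws://127.0.0.1:9944"),
  });

  const chain = await api.rpc.system.chain();
  const lastHeader = await api.rpc.chain.getHeader();
  console.log(`Connected to ${chain}, latest block #${lastHeader.number}`);

  // Alice is one of the well-known development accounts.
  const ALICE = "5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY";
  const { data: { free } } = await api.query.system.account(ALICE);
  console.log(`Alice's free balance: ${free.toHuman()}`);

  await api.disconnect();
}

main().catch(console.error);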

Lab #7

👉 Complete Lab #7: Using Polkadot-JS API

Quiz #6

👉 Complete Quiz #6: Using Polkadot-JS API

Smart contracts

Learn about the difference between smart contract development and Substrate runtime development, and when to use each, here.

In Substrate, you can program smart contracts using ink!.

Quiz #7

👉 Complete Quiz #7: Using ink!

What we do not cover

A lot 😄

On-chain runtime upgrades. We have a tutorial on On-chain (forkless) Runtime Upgrade. This tutorial introduces how to perform and schedule a runtime upgrade as an on-chain transaction.

Transaction weights and fees, and how to benchmark your runtime to determine proper transaction costs.

Off-chain Features

There are certain limits to on-chain logic. For instance, computation cannot be so intensive that it affects the block production time, and computation must be deterministic. This means that computation that relies on external data fetching cannot be done on-chain. In Substrate, developers can run these types of computation off-chain and have the result sent back on-chain via extrinsics.

Tightly- and loosely-coupled pallets: calling one pallet's functions from another pallet via trait specification.

Blockchain consensus mechanisms, and a guide on customizing the consensus to proof-of-work here.

Parachains: one key feature of Substrate is the capability of becoming a parachain for relay chains like Polkadot. You can develop your own application-specific logic in your chain and rely on the validator community of the relay chain to secure your network, instead of building another validator community yourself. Learn more with the following resources:

Terms clarification

  • Substrate: the blockchain development framework built for writing highly customized, domain-specific blockchains.
  • Polkadot: Polkadot is the relay chain blockchain, built with Substrate.
  • Kusama: Kusama is Polkadot's canary network, used to launch features before these features are launched on Polkadot. You could view it as a beta-network with real economic value where the state of the blockchain is never reset.
  • Web 3.0: the decentralized internet ecosystem in which, instead of apps being hosted on a few servers and managed by a single sovereign party, the network is open, trustless, and permissionless, and apps are not controlled by a centralized entity.
  • Web3 Foundation: A foundation set up to support the development of decentralized web software protocols. Learn more about what they do on their website.

Others


Author: substrate-developer-hub
Source Code: https://github.com/substrate-developer-hub/hackathon-knowledge-map
License: 

#blockchain #substrate 

Samanta  Moore

Samanta Moore

1623835440

Builder Design Pattern

What is the Builder Design Pattern? Why should we care about it?

Starting from the Creational Design Patterns: Wikipedia says "creational design patterns are design patterns that deal with object creation mechanisms, trying to create objects in a manner suitable to the situation".

The basic form of object creation could result in design problems or add complexity to the design, so creational design patterns give you controlled ways to create objects and overcome this problem.

Builder is one of the Creational Design Patterns.

When should you consider the Builder Design Pattern?

Builder is useful when you need to do a lot of work to build an object. Let's imagine the DOM (Document Object Model): to create a DOM tree we may have to do a lot of things, appending plenty of nodes and attaching attributes to them. We could also think of building a huge XML object, where a lot of work is needed to create the object. A Factory, by contrast, is used when we can create the entire object in one shot.

As Joshua Bloch (who led the design of the Java Collections Framework and many other libraries) put it: "the Builder pattern is a good choice when designing classes whose constructors or static factories would have more than a handful of parameters".

#java #builder #builder pattern #creational design pattern #design pattern #factory pattern #java design pattern

Jaimin Bhavsar

Jaimin Bhavsar

1574217159

An overview of 6 Patterns for Microfrontends

Microfrontends are not a new thing, but certainly a recent trend. Coined in 2016, the pattern slowly gained popularity as problems started to appear when developing large scale web apps.

In this article, we’ll go over the different patterns for creating microfrontends, their advantages and drawbacks, as well as implementation details and examples for each of the presented methods. I will also argue that microfrontends come with some inherited problems that may be solved by going even a step further — into a region that could be either called Modulith or Siteless UI, depending on the point of view.

But let’s go step by step. We start our journey with a historic background.

Background

When the web (i.e., HTTP as transport and HTML as representation) started there was no notion of “design” or “layout”. Instead, text documents have been exchanged. The introduction of the <img> tag changed that all. Together with <table> designers could declare war on good taste. Nevertheless, one problem arises quite quickly: How is it possible to share a common layout across multiple sites? For this purpose two solutions have been proposed:

  1. Use a program to dynamically generate the HTML (slowish, but okay — especially with the powerful capabilities behind the CGI standard)
  2. Use mechanisms already integrated into the web server to replace common parts with other parts

While the former lead to C and Perl web servers, which became PHP and Java then converted to C# and Ruby, before finally emerging at Elixir and Node.js, the latter wasn’t really after 2002. The web 2.0 also demanded more sophisticated tools, which is why server-side rendering using full-blown applications dominated for quite while.

Until Netflix came and told everyone to make smaller services to make cloud vendors rich. Ironically, while Netflix would be ready for their own data centers they are still massively coupled to cloud vendors such as AWS, which also hosts most of the competition including Amazon Prime Video.

The Patterns

In the following, we’ll look at some of the patterns that are possible for actually realizing a microfrontend architecture. We’ll see that “it depends” is in fact the right answer when somebody asks: “what is the right way to implement microfrontends?”. It very much depends on what we are after.

Each section contains a bit of example code and a very simple snippet (sometimes using a framework) to realize that pattern for a proof of concept or even an MVP. In the end, I try to provide a small summary to indicate the target audience according to my personal feelings.

Regardless of the pattern you choose, when integrating separate projects, keeping a consistent UI is always a challenge. Use tools like Bit (Github) to share and collaborate on UI components across microservices.
Patterns for Microfrontends

1. The Web Approach

The most simple method of implementing microfrontends is to deploy a set of small websites (ideally just a single page), which are just linked together. The user goes from website to website by using the links leading to the different servers providing the content.

To keep the layout consistent a pattern library may be used on the server. Each team can implement the server-side rendering as they desire. The pattern library must also be usable on different platforms.

Patterns for Microfrontends

Using the web approach can be as simple as deploying static sites to a server. This could be done with a Docker image as follows:

FROM nginx:stable
COPY ./dist/ /var/www
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
CMD ["nginx -g 'daemon off;'"]

Obviously, we are not restricted to using a static site. We can apply server-side rendering, too. Changing the nginx base image to, e.g., an ASP.NET Core image allows us to use ASP.NET Core for generating the page. But how is this different from the frontend monolith? In this scenario, we would, for example, take a given microservice, exposed via a web API (i.e., returning something like JSON), and change it to return rendered HTML instead.

Logically, microfrontends in this world are nothing more than a different way of representing our API. Instead of returning “naked” data we generate the view already.
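To make this concrete, here is a small illustrative sketch (the data and routes are made up) of the same service answering with rendered HTML instead of JSON, using only Node's built-in http module:

// Sketch: one service, two representations of the same data.
const http = require("http");

const products = [{ id: 1, name: "Keyboard" }, { id: 2, name: "Mouse" }];

http
  .createServer((req, res) => {
    if (req.url === "/api/products") {
      // The classic microservice answer: "naked" data.
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify(products));
    } else {
      // The microfrontend answer: the same data, already turned into a view.
      const items = products.map(p => `<li>${p.name}</li>`).join("");
      res.writeHead(200, { "Content-Type": "text/html" });
      res.end(`<ul>${items}</ul>`);
    }
  })
  .listen(8080);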

In this space we find the following solutions:

What are the pros and cons of this approach?

  • Pro: Completely isolated
  • Pro: Most flexible approach
  • Pro: Least complicated approach
  • Con: Infrastructure overhead
  • Con: Inconsistent user experience
  • Con: Internal URLs exposed to outside

2. Server-Side Composition

This is the true microfrontend approach. Why? As we have seen, microfrontends were originally supposed to run server-side. As such the whole approach works independently for sure. When we have a dedicated server for each small frontend snippet we may really call this a microfrontend.

In a diagrammatic form, we may end up with a sketch like the one below.
Patterns for Microfrontends

The complexity of this solution lies almost entirely in the reverse proxy layer. How the different smaller sites are combined into one site can be tricky. Especially things like caching rules, tracking, and other tricky bits will bite us at night.

In some sense, this adds a kind of gateway layer to the first approach. The reverse proxy combines different sources into a single delivery; the tricky bits certainly need to (and can) be solved somehow. In nginx, a simple composition could be configured like this:

http {
  server {
    listen 80;
    server_name www.example.com;
    location /api/ {
      proxy_pass http://api-svc:8000/api;
    }
    location /web/admin {
      proxy_pass http://admin-svc:8080/web/admin;
    }
    location /web/notifications {
      proxy_pass http://public-svc:8080/web/notifications;
    }
    location / {
      # fallback for everything else (placeholder upstream)
      proxy_pass http://web-svc:8080/;
    }
  }
}

A little more powerful would be something like the Varnish reverse proxy.

In addition, we find that this is also a perfect use case for ESI (short for Edge-Side Includes), which is the (much more flexible) successor to the historic Server-Side Includes (SSI).

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" />
    <title>Demo</title>
  </head>
  <body>
    <esi:include src="http://header-service.example.com/" />
    <esi:include src="http://checkout-service.example.com/" />
    <esi:include
      src="http://navigator-service.example.com/"
      alt="http://backup-service.example.com/"
    />
    <esi:include src="http://footer-service.example.com/" />
  </body>
</html>

A similar setup can be seen with the Tailor backend service, which is a part of Project Mosaic.

In this space we find the following solutions:

What are the pros and cons of this approach?

  • Pro: Completely isolated
  • Pro: Looks embedded to user
  • Pro: Very flexible approach
  • Con: Enforces coupling between the components
  • Con: Infrastructure complexity
  • Con: Inconsistent user experience

3. Client-Side Composition

At this point one may be wondering: Do we need the reverse proxy? As this is a backend component we may want to avoid this altogether. The solution is client-side composition. In the simplest form this can be implemented with the use of <iframe> elements. Communication between the different parts is done via the postMessage method.
Patterns for Microfrontends

Note: The JavaScript part may be replaced with “browser” in case of an <iframe>. In this case the potential interactivity is certainly different.

As already suggested by the name, this pattern tries to avoid the infrastructure overhead that comes with a reverse proxy. Instead, since the term microfrontend already contains "frontend", the whole rendering is left to the client. The advantage is that, starting with this pattern, going serverless becomes possible. In the end the whole UI could be uploaded to, e.g., a GitHub Pages repository and everything just works.

As outlined, the composition can be done with quite simple methods, e.g., just an <iframe>. One of the major pain points, however, is how such integrations will look to the end user. The duplication in terms of resource needs is also quite substantial. A mix with pattern 1 is definitely possible, where the different parts are being placed on independently operated web servers.

Nevertheless, in this pattern knowledge is required again — so component 1 already knows that component 2 exists and needs to be used. Potentially, it even needs to know how to use it.

Consider the following parent (i.e., the delivered application or website):

https://gist.github.com/dddbf80773ed8df03fcf20679f30c835
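The embedded gist is not reproduced here; a minimal parent could look like the following sketch (the child URL is a placeholder), embedding the child in an <iframe> and talking to it via postMessage:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>Parent</title>
  </head>
  <body>
    <h1>Parent</h1>
    <p><button id="message_button">Send message to child</button></p>
    <div id="results"></div>
    <iframe id="child" src="https://child.example.com/"></iframe>

    <script>
      const results = document.querySelector("#results");
      const messageButton = document.querySelector("#message_button");
      const iframe = document.querySelector("#child");
      // Send a message down to the embedded microfrontend.
      function sendMessage(msg) {
        iframe.contentWindow.postMessage(msg, "*");
      }
      // Receive messages coming up from the child frame.
      window.addEventListener("message", function(e) {
        results.innerHTML = e.data;
      });
      messageButton.addEventListener("click", function() {
        sendMessage(Math.random().toString());
      });
    </script>
  </body>
</html>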

We can write a page that enables the direct communication path:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>Microfrontend</title>
  </head>
  <body>
    <h1>Child</h1>
    <p><button id="message_button">Send message to parent</button></p>
    <div id="results"></div>

    <script>
      const results = document.querySelector("#results");
      const messageButton = document.querySelector("#message_button");
      function sendMessage(msg) {
        window.parent.postMessage(msg, "*");
      }
      window.addEventListener("message", function(e) {
        results.innerHTML = e.data;
      });
      messageButton.addEventListener("click", function(e) {
        sendMessage(Math.random().toString());
      });
    </script>
  </body>
</html>

If we do not consider frames an option we could also go for web components. Here, communication could be done via the DOM by using custom events. However, already at this point it may make sense to consider client-side rendering instead of client-side composition, since rendering implies the need for a JavaScript client anyway (which aligns with the web component approach).
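As a sketch of what that could look like (element and event names are made up), each team ships a custom element and cross-fragment communication goes through DOM custom events:

// Sketch: two independent fragments communicating via a custom DOM event.
class TeamBasket extends HTMLElement {
  connectedCallback() {
    this.innerHTML = `<button>Add to basket</button>`;
    this.querySelector("button").addEventListener("click", () => {
      // Bubble an event up so sibling fragments can react to it.
      this.dispatchEvent(new CustomEvent("basket:add", {
        bubbles: true,
        detail: { productId: this.getAttribute("product-id") },
      }));
    });
  }
}

class TeamBasketCounter extends HTMLElement {
  connectedCallback() {
    this.count = 0;
    this.textContent = "Basket: 0";
    // Listen on the document so events from any fragment are picked up.
    document.addEventListener("basket:add", () => {
      this.textContent = `Basket: ${++this.count}`;
    });
  }
}

customElements.define("team-basket", TeamBasket);
customElements.define("team-basket-counter", TeamBasketCounter);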

In this space we find the following solutions:

What are the pros and cons of this approach?

  • Pro: Completely isolated
  • Pro: Looks embedded to user
  • Pro: Serverless possible
  • Con: Enforces coupling between the components
  • Con: Inconsistent user experience
  • Con: May require JavaScript / no seamless integration

4. Client-Side Rendering

While client-side composition may work without JavaScript (e.g., only using frames that do not rely on communication with the parent or each other), client-side rendering will fail without JavaScript. In this space we already start creating a framework in the composing application. This framework has to be respected by all microfrontends brought in. At least they need to use it to be mounted properly.

The pattern looks as follows.
Patterns for Microfrontends

Quite close to the client-side composition, right? In this case the JavaScript part may not be replaced. The important difference is that server-side rendering is in general off the table. Instead, pieces of data are exchanged, which are then transformed into a view.

Depending on the designed or used framework the pieces of data may determine the location, point in time, and interactivity of the rendered fragment. Achieving a high degree of interactivity is no problem with this pattern.
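A minimal sketch of this idea (all URLs and the fragment format are made up) could be a shell that fetches data from each team's endpoint and turns it into a view on the client:

// Sketch: the shell fetches data per fragment and renders it client-side.
const fragments = [
  { slot: "#header", url: "https://team-red.example.com/api/header" },
  { slot: "#basket", url: "https://team-blue.example.com/api/basket" },
];

fragments.forEach(({ slot, url }) => {
  const host = document.querySelector(slot);
  fetch(url)
    .then(res => res.json())
    .then(data => {
      // Each team returns data, not markup; the shell decides how to render it.
      const title = document.createElement("h2");
      title.textContent = data.title;
      const body = document.createElement("p");
      body.textContent = data.content;
      host.replaceChildren(title, body);
    })
    .catch(() => {
      host.textContent = "Fragment currently unavailable.";
    });
});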

In this space we find the following solutions:

What are the pros and cons of this approach?

  • Pro: Enforces separation of concerns
  • Pro: Provides loose coupling of the components
  • Pro: Looks embedded to the user
  • Con: Requires more logic in the client
  • Con: Inconsistent user experience
  • Con: Requires JavaScript

5. SPA Composition

Why should we stop at client-side rendering using a single technology? Why not just obtain a JavaScript file and run it alongside all the other JavaScript files? The benefit of this is the potential use of multiple technologies side by side.
Patterns for Microfrontends

It is up for debate whether running multiple technologies (no matter whether it's in the backend or the frontend; granted, in the backend it may be more "acceptable") is a good thing or something to avoid. However, there are scenarios where multiple technologies need to work together.

Off the top of my head:

  • Migration scenarios
  • Support of a specific third-party technology
  • Political issues
  • Team constraints

Either way, the emerging pattern could be drawn as below.

So what is going on here? In this case delivering just some JavaScript with the app shell is no longer optional — instead, we need to deliver a framework that is capable of orchestrating the microfrontends.

The orchestration of the different modules boils down to the management of a lifecycle: mounting, running, unmounting. The different modules can be taken from independently running servers; however, their locations must already be known in the application shell.

Implementing such a framework requires at least some configuration, e.g., a map of the scripts to include:

const scripts = [
  'https://example.com/script1.js',
  'https://example.com/script2.js',
];
const registrations = {};

function activityCheck(name) {
  const current = location.hash;
  const registration = registrations[name];

  if (registration) {
    if (registration.activity(current) !== registration.active) {
      if (registration.active) {
        registration.lifecycle.unmount();
      } else {
        registration.lifecycle.mount();
      }

      registration.active = !registration.active;
    }
  }
}

window.addEventListener('hashchange', function () {
  Object.keys(registrations).forEach(activityCheck);
});

window.registerApp = function(name, activity, lifecycle) {
  registrations[name] = {
    activity,
    lifecycle,
    active: false,
  };
  activityCheck(name);
}

scripts.forEach(src => {
  const script = document.createElement('script');
  script.src = src;
  document.body.appendChild(script);
});

The lifecycle management may be more complicated than in the script above. Thus a module for such a composition needs to come with some structure applied; at the very least an exposed mount and unmount function, as sketched below.
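For instance, a module loaded by the shell sketched above could register itself like this (using the registerApp signature defined there; the name and route are made up):

// Sketch: a module registering its activity check and lifecycle.
window.registerApp(
  "settings",
  // Active whenever the hash points at this module.
  hash => hash.startsWith("#/settings"),
  {
    mount() {
      const host = document.createElement("div");
      host.id = "settings-app";
      host.textContent = "Settings micro frontend";
      document.body.appendChild(host);
    },
    unmount() {
      const host = document.querySelector("#settings-app");
      if (host) {
        host.remove();
      }
    },
  }
);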

In this space we find the following solutions:

What are the pros and cons of this approach?

  • Pro: Enforces separation of concerns
  • Pro: Gives developers much freedom
  • Pro: Looks embedded to the user
  • Con: Enforces duplication and overhead
  • Con: Inconsistent user experience
  • Con: Requires JavaScript

6. Siteless UIs

This topic deserves its own article, but since we are listing all the patterns I don’t want to omit it here. Taking the approach of SPA composition all we miss is a decoupling (or independent centralization) of the script sources from the services, as well as a shared runtime.

Both things are done for a reason:

  • The decoupling makes sure that UI and service responsibilities are not mixed up; this also enables serverless computing
  • The shared runtime is the cure for the resource-intensive composition given by the previous pattern

Both things combined yield the same kind of benefits for the frontend that "serverless functions" brought to the backend. They also come with similar challenges:

  • The runtime cannot just be updated; it must remain consistent with the modules
  • Debugging or running the modules locally requires an emulator of the runtime
  • Not all technologies may be supported equally

The diagram for siteless UIs looks as follows.
Patterns for Microfrontends

The main advantage of this design is that sharing of useful or common resources is supported. Sharing a pattern library makes a lot of sense.

All in all the architecture diagram looks quite similar to the SPA composition mentioned earlier. However, the feed service and the coupling to a runtime bring additional benefits (and challenges to be solved by any framework in that space).

The big advantage is that once these challenges are cracked the development experience is supposed to be excellent. The user experience can be fully customized, treating the modules as flexible opt-in pieces of functionality. A clear separation between feature (the respective implementation) and permission (the right to access the feature) is thus possible.

One of the easiest implementations of this pattern is the following:

// app-shell/main.js
window.app = {
  registerPage(url, cb) {}
  // ...
};

showLoading();

fetch("https://feed.piral.io/api/v1/pilet/sample")
  .then(res => res.json())
  .then(body =>
    Promise.all(
      body.items.map(
        item =>
          new Promise(resolve => {
            const script = document.createElement("script");
            script.src = item.link;
            script.onload = resolve;
            document.body.appendChild(script);
          })
      )
    )
  )
  .catch(err => console.error(err))
  .then(() => hideLoading());

// module/index.jsx
import * as React from "react";
import { render } from "react-dom";
import { Page } from "./Page";

if (window.app !== undefined) {
  window.app.registerPage("/sample", element => {
    render(<Page />, element);
  });
}

This uses a global variable to share the API from the app shell. However, we already see several challenges using this approach:

  • What if one module crashes?
  • How to share dependencies (to avoid bundling them with each module, as shown in the simple implementation)?
  • How to get proper typing?
  • How is this being debugged?
  • How is proper routing done?

Implementing all of these features is a topic of its own. Regarding debugging, we should follow the same approach as the serverless frameworks (e.g., AWS Lambda, Azure Functions): ship an emulator that behaves like the real thing later on, except that it runs locally and works offline.

In this space we find the following solutions:

What are the pros and cons of this approach?

  • Pro: Enforces separation of concerns
  • Pro: Supports sharing of resources to avoid overhead
  • Pro: Consistent and embedded user experience
  • Con: Strict dependency management for shared resources necessary
  • Con: Requires another piece of (potentially managed) infrastructure
  • Con: Requires JavaScript

A Framework for Microfrontends

Finally, we should have a look at how one of the provided frameworks can be used to implement microfrontends. We go for Piral as this is the one I’m most familiar with.

In the following we approach the problem from two sides. First, we start with a module (i.e., a microfrontend) in this context. Then we'll walk through creating an app shell.

For the module we use my Mario5 toy project. This is a project that started several years ago with a JavaScript implementation of Super Mario called “Mario5”. It was followed up by a TypeScript tutorial / rewrite named “Mario5TS”, which has been kept up-to-date since then.

For the app shell we utilize the sample Piral instance. This one shows all concepts in one sweep. It’s also always kept up-to-date.

Let’s start with a module, which in the Piral framework is called a pilet. At its core, a pilet has a JavaScript root module which is usually located in src/index.tsx.

A Pilet

Starting with an empty pilet gives us the following root module:

import { PiletApi } from "sample-piral";

export function setup(app: PiletApi) {}

We need to export a specially named function called setup. This one will be used later to integrate the specific parts of our application.

Using React we could for instance register a menu item or a tile to be always displayed:

import "./Styles/tile.scss";
import * as React from "react";
import { Link } from "react-router-dom";
import { PiletApi } from "sample-piral";

export function setup(app: PiletApi) {
  app.registerMenu(() => <Link to="/mario5">Mario 5</Link>);

  app.registerTile(
    () => (
      <Link to="/mario5" className="mario-tile">
        Mario5
      </Link>
    ),
    {
      initialColumns: 2,
      initialRows: 2
    }
  );
}

Since our tile requires some styling we also add a stylesheet to our pilet. Great, so far so good. All the directly included resources will always be available in the app shell.

Now it's time to integrate the game itself. We decide to put it on a dedicated page, even though a modal dialog may also be cool. All the code sits in mario.ts and works against the standard DOM; no React yet.

As React also supports manipulating hosted DOM nodes, we use a ref hook to attach the game.

import "./Styles/tile.scss";
import * as React from "react";
import { Link } from "react-router-dom";
import { PiletApi } from "sample-piral";
import { appendMarioTo } from "./mario";

export function setup(app: PiletApi) {
  app.registerMenu(() => <Link to="/mario5">Mario 5</Link>);

  app.registerTile(
    () => (
      <Link to="/mario5" className="mario-tile">
        Mario5
      </Link>
    ),
    {
      initialColumns: 2,
      initialRows: 2
    }
  );

  app.registerPage("/mario5", () => {
    const host = React.useRef();
    React.useEffect(() => {
      const gamePromise = appendMarioTo(host.current, {
        sound: true
      });
      gamePromise.then(game => game.start());
      return () => gamePromise.then(game => game.pause());
    }, []);
    return <div ref={host} />;
  });
}

Theoretically, we could also add further functionality such as resuming the game or lazy loading the side-bundle containing the game. Right now only sounds are lazy loaded through calls to the import() function.

Starting the pilet will be done via

https://gist.github.com/0678ac99f4653843159cf8e54ad06422

which uses the Piral CLI under the hood. The Piral CLI is always installed locally, but could also be installed globally to make commands such as pilet debug directly available on the command line.

Building the pilet can also be done with the local installation.

npm run build-pilet

The App Shell

Now it's time to create an app shell. Usually, we would already have an app shell (e.g., the previous pilet was created for the existing sample app shell), but in my opinion it's more important to see how the development of a module goes first.

Creating an app shell with Piral is as simple as installing piral. To make it even simpler, the Piral CLI also supports scaffolding a new app shell.

Either way we will most likely end up with something like this:

import * as React from "react";
import { render } from "react-dom";
import { createInstance, Piral, Dashboard, SetComponent, SetRoute } from "piral";
import { Layout, Loader } from "./layout";

const instance = createInstance({
  requestPilets() {
    return fetch("https://feed.piral.io/api/v1/pilet/sample")
      .then(res => res.json())
      .then(res => res.items);
  }
});

const app = (
  <Piral instance={instance}>
    <SetComponent name="LoadingIndicator" component={Loader} />
    <SetComponent name="Layout" component={Layout} />
    <SetRoute path="/" component={Dashboard} />
  </Piral>
);

render(app, document.querySelector("#app"));

Here we do three things:

  1. We set up all the imports and libraries
  2. We create the Piral instance; supplying all functional options (most importantly declaring where the pilets come from)
  3. We render the app shell using components and a custom layout that we defined

The actual rendering is done from React.

Building the app shell is straightforward: in the end a standard bundler (Parcel) processes the whole application. The output is a folder containing all files to be placed on a web server or static storage.

Points of Interest

Coining the term “Siteless UI” potentially requires a bit of explanation. I’ll try to start with the name first: As seen, it is a direct reference to “Serverless Computing”. While serverless may also be a good term for the used technology, it may also be misleading and wrong. UIs can generally be deployed on serverless infrastructures (e.g., Amazon S3, Azure Blob Storage, Dropbox). This is one of the benefits of “rendering the UI on the client” instead of doing server-side rendering.

However, I wanted to capture the idea of "a UI that cannot live without a host", the same as with serverless functions, which require a runtime sitting somewhere and could not start otherwise.

Let’s compare the similarities. First, let’s start of with the conjunction that microfrontends should be to frontend UIs what microservices have been to backend services. In this case we should have:

  • can start standalone
  • provide independent URLs
  • have independent lifetime (startup, recycle, shutdown)
  • do independent deployments
  • define an independent user / state management
  • run as a dedicated webserver somewhere (e.g., as a docker image)
  • if combined, we use a gateway-like structure

Great, certainly some things apply here and there; however, note that some of this contradicts SPA composition and, as a result, siteless UIs, too.

Now let’s compare this to an analogy that we can do if the conjecture changes to: siteless UIs should be to frontend UIs what serverless functions have been to backend services. In this case we have:

  • require a runtime to start
  • provide URLs within the environment given by the runtime
  • are coupled to the lifetime defined by the runtime
  • do independent deployments
  • define partially independent, partially shared (but controlled / isolated) user / state management
  • run on a non-operated infrastructure somewhere else (e.g., on the client)

If you agree that this reads like a perfect analogy — awesome! If not, please provide your points in the comments below. I'll still try to see where this is going and if this seems like a great idea to follow. Much appreciated!

#Microfrontend #cloud