Visual Studio 2019 Tips and Tricks

Microsoft recently released the preview of Visual Studio 2019 for Windows, and it’s got lots of improvements and features! After reading the release notes, I reached out to Allison Buchholtz-Au and Kendra Havens on the Visual Studio team at Microsoft to get an idea of what features should be configured immediately after downloading and installing this shiny new IDE. After they showed me a couple of quick tips, VS 2019 started to feel like the superhero that can make anyone more effective as an ASP.NET developer. Let’s dive into the good stuff!


Download Visual Studio 2019

First, you'll need Visual Studio 2019 on your Windows machine. If you don't have it yet, you can download Visual Studio 2019 Preview for free.


Decompiled Resources

Visibility into how an external library or dependency resource is handling the data you are giving it can provide valuable insight. Visual Studio 2019 now provides this feature, but you do need to set it up. Follow the steps below to enable it.

Go to the top menu bar. Select Tools > Options. Type “decompile” into the search bar. The Advanced section of Text Editor for C# will appear. Click on Advanced.

Check the box that says Enable navigation to decompiled sources (experimental). Now you can debug and step into the external packages you pull in from NuGet and elsewhere! I found this incredibly useful right away.


Code Cleanup

Similar to Format Document, this new feature allows you to configure a predefined set of several rules to clean up in your code all at once. To set this up, follow the steps below.

Click the little broom icon at the bottom of the window. Select Configure Code Cleanup.

You will be presented with a configuration menu. Notice that there are two profiles available to you to add or remove filters from. This is to allow a couple of different cleanups to be configured at the same time, and you can select whichever profile you need. You cannot add more profiles at this time. I found this to be helpful to set up different filters for a solution that has both front end and back end projects inside of it.

Select any filter you want and press the up arrow [ ^ ] to add it. Click OK.

Now you can run Code Cleanup for the profile you added your filter to. Click the broom icon again and select Run Code Cleanup (Profile 1).

In this example, not only did it sort the usings at the top of the auto-generated file, but it also removed the unnecessary ones. I added tons of filters after this to all run at once! This is probably one of my favorite new features in Visual Studio 2019.


Editorconfig Export for your Team

Interested in sharing editor configuration with your team, or importing your team’s standard one? Sometimes the tabs vs spaces battle is due to what a developer has set up locally in their IDE. One solution is to export the agreed upon code styling standards and distribute it to everyone. This is a handy new feature in Visual Studio 2019. Follow below to set it up!

To export preferred code styles, go to the top menu bar. Select Tools > Options. Type “code style” into the search bar. The Text Editor for C# will appear. Click on Code Style. Notice the list under formatting that you can set up and configure. Once you have everything just as you like it, click on Generate .editorconfig file from settings.

It will prompt you to save the configuration file. For this example, name it test.editorconfig. I would suggest saving it somewhere within your team’s code repository for this solution or project. That ensures it comes along with the project when cloned.
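For reference, the generated file is just a plain-text list of style settings. A trimmed-down sketch of what test.editorconfig might contain (the exact entries depend on the options you configured):

root = true

[*.cs]
indent_style = space
indent_size = 4
csharp_new_line_before_open_brace = all
dotnet_sort_system_directives_first = true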

Now let’s import the code styles into your local Visual Studio 2019 environment. Right-click on the Solution, go to Add and select Existing Item.

Navigate to the saved location of your test.editorconfig file and click Add. Your environment will now use the imported configuration settings set up for your team.

If you are interested in what was generated, click on test.editorconfig right underneath the Solution Items node and take a look!


Solution Filter

Ever had a monolithic solution with way too many projects inside of it? Does it all take a while to load? Now you can save the state of your solution with only the desired projects loaded up. Follow along to set up a better solution when you are working on that huge team repository.

First, open up the whole solution with all of the projects in it. This may take a while to load.

Unload the projects you aren’t using by right-clicking on each project and select Unload Project.

Now that you have only the desired projects loaded up, you can save the state of the solution by right-clicking on the Solution, and selecting Save as Solution Filter.

Save the file and name it bettersolution.slnf at the top of the repo. Close the solution. Now open the solution filter file you just saved.

It should load MUCH faster this time.

Notice that only your chosen projects are loaded, and all of the unloaded ones are left completely out now - leaving a clean and relevant solution file. However, in the event that you would like to add back all of the unloaded projects, the solution filter still retains that context for you. If you right-click on the Solution name and select Load All Projects, everything will be restored.
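If you are curious what the filter file itself contains, it is a small JSON document that points at the original solution and lists the projects to keep loaded. A sketch of what bettersolution.slnf could look like (the solution and project paths here are illustrative):

{
  "solution": {
    "path": "BigSolution.sln",
    "projects": [
      "src\\Web\\Web.csproj",
      "src\\Web.Tests\\Web.Tests.csproj"
    ]
  }
}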

This was a pretty nice way to keep everything organized and loading fast when first opening up the solution in Visual Studio 2019. This can be really refreshing for any enterprise developer who deals with multiple projects.


But Wait, There’s More!

Interested in what else this new release has to offer? Get more information about the features that are enabled right out of the box with Visual Studio 2019 by checking out the Release Notes and FAQ on Microsoft’s preview site.


Learn More

VS Code extensions you may not have heard of before

Debugging Angular CLI Applications in Visual Studio Code

Getting Started with Python in Visual Studio Code

Build Productive Python Web Apps with VS Code and Azure

VS Code: C++ Development with Visual Studio Code

What VS Code Extensions you should consider using?

Creating a Python Class Generator for VS Code

Originally published by Heather Downing at https://developer.okta.com

Angular Architecture Patterns and Best Practices (that help to scale)
<p class="ql-align-center">Originally published by Bartosz Pietrucha at angular-academy.com</p><p> In order to deal with mentioned factors to maintain a high quality of delivery and prevent technical debt, robust and well-grounded architecture is necessary. Angular itself is a quite opinionated framework, forcing developers to do things the proper way, yet there are a lot of places where things can go wrong. In this article, I will present high-level recommendations of well-designed Angular application architecture based on best practices and battle-proven patterns. Our ultimate goal in this article is to learn how to design Angular application in order to maintain sustainable development speed and ease of adding new features in the long run. To achieve these goals, we will apply:</p>
    <li>proper abstractions between application layers,</li><li>unidirectional data flow,</li><li>reactive state management,</li><li>modular design,</li><li>smart and dumb components pattern.</li>
<p></p>

Problems of scalability in front-end

Let's think about the scalability problems we can face in the development of modern front-end applications. Today, front-end applications are not "just displaying" data and accepting user input. Single Page Applications (SPAs) provide users with rich interactions and use the backend mostly as a data persistence layer. This means far more responsibility has been moved to the front-end part of software systems, which leads to growing complexity of the front-end logic we need to deal with. Not only does the number of requirements grow over time, but the amount of data we load into the application is also increasing. On top of that, we need to maintain application performance, which can easily be hurt. Finally, our development teams are growing (or at least rotating - people come and go) and it is important for newcomers to get up to speed as fast as possible.

One of the solutions to the problems described above is a solid system architecture. But this comes with a cost: the cost of investing in that architecture from day one. It can be very tempting for us developers to deliver new features very quickly when the system is still very small. At this stage, everything is easy and understandable, so development goes really fast. But unless we care about the architecture, after a few developer rotations, tricky features, refactorings and a couple of new modules, the speed of development slows down radically. The diagram below presents how it usually looked in my development career. This is not a scientific study, it's just how I see it.

Software architecture

To discuss architecture best practices and patterns, we first need to answer the question of what software architecture is. Martin Fowler defines architecture as the "highest-level breakdown of a system into its parts". On top of that, I would say that software architecture describes how the software is composed of its parts and what the rules and constraints of the communication between those parts are. Usually, the architectural decisions that we make in our system development are hard to change as the system grows over time. That's why it is very important to pay attention to those decisions from the very beginning of our project, especially if the software we build is supposed to be running in production for many years. Robert C. Martin once said: the true cost of software is its maintenance. Having a well-grounded architecture helps to reduce the costs of the system's maintenance.
Software architecture is the way the software is composed of its parts and the rules and constraints of the communication between those parts

High-level abstraction layers

The first way we will decompose our system is through abstraction layers. The diagram below depicts the general concept of this decomposition. The idea is to place the proper responsibility into the proper layer of the system: the core, abstraction or presentation layer. We will be looking at each layer independently and analyzing its responsibility. This division of the system also dictates communication rules. For example, the presentation layer can talk to the core layer only through the abstraction layer. Later, we will learn the benefits of this kind of constraint.

Presentation layer

Let's start analyzing our system break-down from the presentation layer. This is the place where all our Angular components live. The only responsibilities of this layer are to present and to delegate. In other words, it presents the UI and delegates the user's actions to the core layer, through the abstraction layer. It knows what to display and what to do, but it does not know how the user's interactions should be handled.

The code snippet below shows CategoriesComponent using a SettingsFacade instance from the abstraction layer to delegate the user's interaction (via addCategory() and updateCategory()) and present some state in its template (via isUpdating$).

@Component({
  selector: 'categories',
  templateUrl: './categories.component.html',
  styleUrls: ['./categories.component.scss']
})
export class CategoriesComponent implements OnInit {

  @Input() cashflowCategories$: CashflowCategory[];
  newCategory: CashflowCategory = new CashflowCategory();
  isUpdating$: Observable<boolean>;

  constructor(private settingsFacade: SettingsFacade) {
    this.isUpdating$ = settingsFacade.isUpdating$();
  }

  ngOnInit() {
    this.settingsFacade.loadCashflowCategories();
  }

  addCategory(category: CashflowCategory) {
    this.settingsFacade.addCashflowCategory(category);
  }

  updateCategory(category: CashflowCategory) {
    this.settingsFacade.updateCashflowCategory(category);
  }
}

Abstraction layer

The abstraction layer decouples the presentation layer from the core layer and also has its own well-defined responsibilities. This layer exposes streams of state and an interface for the components in the presentation layer, playing the role of a facade. This kind of facade sandboxes what components can see and do in the system. We can implement facades by simply using Angular class providers. The classes here may be named with a Facade postfix, for example SettingsFacade. Below, you can find an example of such a facade.

@Injectable()
export class SettingsFacade {

  constructor(private cashflowCategoryApi: CashflowCategoryApi, private settingsState: SettingsState) { }

  isUpdating$(): Observable<boolean> {
    return this.settingsState.isUpdating$();
  }

  getCashflowCategories$(): Observable<CashflowCategory[]> {
    // here we just pass the state without any projections
    // it may happen that it is necessary to combine two or more streams and expose to the components
    return this.settingsState.getCashflowCategories$();
  }

  loadCashflowCategories() {
    return this.cashflowCategoryApi.getCashflowCategories()
      .pipe(tap(categories => this.settingsState.setCashflowCategories(categories)));
  }

  // optimistic update
  // 1. update UI state
  // 2. call API
  addCashflowCategory(category: CashflowCategory) {
    this.settingsState.addCashflowCategory(category);
    this.cashflowCategoryApi.createCashflowCategory(category)
      .subscribe(
        (addedCategoryWithId: CashflowCategory) => {
          // success callback - we have id generated by the server, let's update the state
          this.settingsState.updateCashflowCategoryId(category, addedCategoryWithId);
        },
        (error: any) => {
          // error callback - we need to rollback the state change
          this.settingsState.removeCashflowCategory(category);
          console.log(error);
        }
      );
  }

  // pessimistic update
  // 1. call API
  // 2. update UI state
  updateCashflowCategory(category: CashflowCategory) {
    this.settingsState.setUpdating(true);
    this.cashflowCategoryApi.updateCashflowCategory(category)
      .subscribe(
        () => this.settingsState.updateCashflowCategory(category),
        (error) => console.log(error),
        () => this.settingsState.setUpdating(false)
      );
  }
}

Abstraction interface

We already know the main responsibilities of this layer: to expose streams of state and an interface for the components. Let's start with the interface. The public methods loadCashflowCategories(), addCashflowCategory() and updateCashflowCategory() abstract away the details of state management and the external API calls from the components. We are not using API providers (like CashflowCategoryApi) in components directly, as they live in the core layer. Also, how the state changes is not a concern of the components. The presentation layer should not care about how things are done and components should just call the methods from the abstraction layer when necessary (delegate). Looking at the public methods in our abstraction layer should give us a quick insight into the high-level use cases in this part of the system.

But we should remember that the abstraction layer is not a place to implement business logic. Here we just want to connect the presentation layer to our business logic, abstracting the way it is connected.

State

When it comes to the state, the abstraction layer makes our components independent of the state management solution. Components are given Observables with data to display in the templates (usually with the async pipe) and don't care how and where this data comes from. To manage our state we can pick any state management library that supports RxJS (like NgRx) or simply use BehaviorSubjects to model our state. In the example above we are using a state object that internally uses BehaviorSubjects (the state object is a part of our core layer). In the case of NgRx, we would dispatch actions for the store.

Having this kind of abstraction gives us a lot of flexibility and allows us to change the way we manage state without even touching the presentation layer. It's even possible to seamlessly migrate to a real-time backend like Firebase, making our application real-time. I personally like to start with BehaviorSubjects to manage the state. If later, at some point in the development of the system, there is a need to use something else, with this kind of architecture, it is very easy to refactor.
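As an illustration of the async pipe usage mentioned above, the CategoriesComponent shown earlier could bind its isUpdating$ stream directly in its template, roughly like this (the markup below is a sketch, not taken from the original project):

<!-- categories.component.html (sketch) -->
<div class="spinner" *ngIf="isUpdating$ | async">Updating...</div>
<button [disabled]="isUpdating$ | async" (click)="updateCategory(newCategory)">
  Save
</button>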

Synchronization strategy

Now, let's take a closer look at another important aspect of the abstraction layer. Regardless of the state management solution we choose, we can implement UI updates in either an optimistic or a pessimistic fashion. Imagine we want to create a new record in a collection of some entities. This collection was fetched from the backend and displayed in the DOM. In a pessimistic approach, we first try to update the state on the backend side (for example with an HTTP request) and, in case of success, we update the state in the frontend application. On the other hand, in an optimistic approach, we do it in a different order. First, we assume that the backend update will succeed and update the frontend state immediately. Then we send a request to update the server state. In case of success, we don't need to do anything, but in case of failure, we need to roll back the change in our frontend application and inform the user about this situation.

Optimistic update changes the UI state first and then attempts to update the backend state. This provides the user with a better experience, as they do not see any delays caused by network latency. If the backend update fails, the UI change has to be rolled back.
Pessimistic update changes the backend state first and only in case of success updates the UI state. Usually, it is necessary to show some kind of spinner or loading bar during the execution of the backend request, because of network latency.

Caching

Sometimes, we may decide that the data we fetch from the backend will not be a part of our application state. This may be useful for read-only data that we don't want to manipulate at all and just pass (via the abstraction layer) to the components. In this case, we can apply data caching in our facade. The easiest way to achieve it is to use the shareReplay() RxJS operator, which will replay the last value in the stream for each new subscriber. Take a look at the code snippet below with RecordsFacade using RecordApi to fetch, cache and filter the data for the components.

@Injectable()
export class RecordsFacade {

  private records$: Observable<Record[]>;

  constructor(private recordApi: RecordApi) {
    this.records$ = this.recordApi
      .getRecords()
      .pipe(shareReplay(1)); // cache the data
  }

  getRecords() {
    return this.records$;
  }

  // project the cached data for the component
  getRecordsFromPeriod(period?: Period): Observable<Record[]> {
    return this.records$
      .pipe(map(records => records.filter(record => record.inPeriod(period))));
  }

  searchRecords(search: string): Observable<Record[]> {
    return this.recordApi.searchRecords(search);
  }
}

To sum up, what we can do in the abstraction layer is to:

- expose methods for the components in which we:
  - delegate logic execution to the core layer,
  - decide about the data synchronization strategy (optimistic vs. pessimistic),
- expose streams of state for the components:
  - pick one or more streams of UI state (and combine them if necessary),
  - cache data from the external API.
As we see, the abstraction layer plays an important role in our layered architecture. It has clearly defined responsibilities, which helps to better understand and reason about the system. Depending on your particular case, you can create one facade per Angular module or one per entity. For example, the SettingsModule may have a single SettingsFacade, if it's not too bloated. But sometimes it may be better to create more granular abstraction facades for each entity individually, like a UserFacade for the User entity.

Core layer

The last layer is the core layer. This is where the core application logic is implemented. All data manipulation and outside-world communication happen here. If we were using a solution like NgRx for state management, this would be the place to put our state definition, actions and reducers. Since in our examples we are modeling state with BehaviorSubjects, we can encapsulate it in a convenient state class. Below, you can find the SettingsState example from the core layer.

@Injectable()
export class SettingsState {

  private updating$ = new BehaviorSubject<boolean>(false);
  private cashflowCategories$ = new BehaviorSubject<CashflowCategory[]>(null);

  isUpdating$() {
    return this.updating$.asObservable();
  }

  setUpdating(isUpdating: boolean) {
    this.updating$.next(isUpdating);
  }

  getCashflowCategories$() {
    return this.cashflowCategories$.asObservable();
  }

  setCashflowCategories(categories: CashflowCategory[]) {
    this.cashflowCategories$.next(categories);
  }

  addCashflowCategory(category: CashflowCategory) {
    const currentValue = this.cashflowCategories$.getValue();
    this.cashflowCategories$.next([...currentValue, category]);
  }

  updateCashflowCategory(updatedCategory: CashflowCategory) {
    const categories = this.cashflowCategories$.getValue();
    const indexOfUpdated = categories.findIndex(category => category.id === updatedCategory.id);
    categories[indexOfUpdated] = updatedCategory;
    this.cashflowCategories$.next([...categories]);
  }

  updateCashflowCategoryId(categoryToReplace: CashflowCategory, addedCategoryWithId: CashflowCategory) {
    const categories = this.cashflowCategories$.getValue();
    const updatedCategoryIndex = categories.findIndex(category => category === categoryToReplace);
    categories[updatedCategoryIndex] = addedCategoryWithId;
    this.cashflowCategories$.next([...categories]);
  }

  removeCashflowCategory(categoryRemove: CashflowCategory) {
    const currentValue = this.cashflowCategories$.getValue();
    this.cashflowCategories$.next(currentValue.filter(category => category !== categoryRemove));
  }
}

In the core layer, we also implement HTTP queries in the form of class providers. This kind of class could have an Api or Service name postfix. API services have only one responsibility - to communicate with API endpoints and nothing else. We should avoid any caching, logic or data manipulation here. A simple example of an API service can be found below.

@Injectable()
export class CashflowCategoryApi {

  readonly API = '/api/cashflowCategories';

  constructor(private http: HttpClient) {}

  getCashflowCategories(): Observable<CashflowCategory[]> {
    return this.http.get<CashflowCategory[]>(this.API);
  }

  createCashflowCategory(category: CashflowCategory): Observable<any> {
    return this.http.post(this.API, category);
  }

  updateCashflowCategory(category: CashflowCategory): Observable<any> {
    return this.http.put(`${this.API}/${category.id}`, category);
  }
}

In this layer, we could also place any validators, mappers or more advanced use cases that require manipulating many slices of our UI state.

We have covered the topic of the abstraction layers in our frontend application. Every layer has its well-defined boundaries and responsibilities. We also defined strict rules of communication between layers. This all helps to better understand and reason about the system over time as it becomes more and more complex.

If you need help with your project, check out Angular Academy Workshops or write an email to [email protected].

Unidirectional data flow and reactive state management

The next principle we want to introduce in our system is about the data flow and propagation of change. Angular itself uses unidirectional data flow on the presentation level (via input bindings), but we will impose a similar restriction on the application level. Together with reactive state management (based on streams), it will give us a very important property of the system - data consistency. The diagram below presents the general idea of unidirectional data flow.

Whenever any model value changes in our application, the Angular change detection system takes care of the propagation of that change. It does it via input property bindings from the top to the bottom of the whole component tree. It means that a child component can only depend on its parent and never vice versa. This is why we call it unidirectional data flow. This allows Angular to traverse the component tree only once (as there are no cycles in the tree structure) to achieve a stable state, which means that every value in the bindings is propagated.

As we know from the previous chapters, there is the core layer above the presentation layer, where our application logic is implemented. There are the services and providers that operate on our data. What if we apply the same principle of data manipulation on that level? We can place the application data (the state) in one place "above" the components and propagate the values down to the components via Observable streams (Redux and NgRx call this place a store). The state can be propagated to multiple components and displayed in multiple places, but never modified locally. The change may come only "from above" and the components below only "reflect" the current state of the system. This gives us the important system property mentioned before - data consistency - and the state object becomes the single source of truth. Practically speaking, we can display the same data in multiple places and not be afraid that the values would differ.

Our state object exposes the methods for the services in our core layer to manipulate the state. Whenever there is a need to change the state, it can happen only by calling a method on the state object (or dispatching an action in the case of NgRx). Then, the change is propagated "down", via streams, to the presentation layer (or any other service). This way, our state management is reactive. Moreover, with this approach, we also increase the level of predictability in our system, because of the strict rules of manipulating and sharing the application state. Below you can find a code snippet modeling the state with BehaviorSubjects.

@Injectable()
export class SettingsState {

  private updating$ = new BehaviorSubject<boolean>(false);
  private cashflowCategories$ = new BehaviorSubject<CashflowCategory[]>(null);

  isUpdating$() {
    return this.updating$.asObservable();
  }

  setUpdating(isUpdating: boolean) {
    this.updating$.next(isUpdating);
  }

  getCashflowCategories$() {
    return this.cashflowCategories$.asObservable();
  }

  setCashflowCategories(categories: CashflowCategory[]) {
    this.cashflowCategories$.next(categories);
  }

  addCashflowCategory(category: CashflowCategory) {
    const currentValue = this.cashflowCategories$.getValue();
    this.cashflowCategories$.next([...currentValue, category]);
  }

  updateCashflowCategory(updatedCategory: CashflowCategory) {
    const categories = this.cashflowCategories$.getValue();
    const indexOfUpdated = categories.findIndex(category => category.id === updatedCategory.id);
    categories[indexOfUpdated] = updatedCategory;
    this.cashflowCategories$.next([...categories]);
  }

  updateCashflowCategoryId(categoryToReplace: CashflowCategory, addedCategoryWithId: CashflowCategory) {
    const categories = this.cashflowCategories$.getValue();
    const updatedCategoryIndex = categories.findIndex(category => category === categoryToReplace);
    categories[updatedCategoryIndex] = addedCategoryWithId;
    this.cashflowCategories$.next([...categories]);
  }

  removeCashflowCategory(categoryRemove: CashflowCategory) {
    const currentValue = this.cashflowCategories$.getValue();
    this.cashflowCategories$.next(currentValue.filter(category => category !== categoryRemove));
  }
}

Let's recap the steps of handling a user interaction, keeping in mind all the principles we have already introduced. First, let's imagine that there is some event in the presentation layer (for example, a button click). The component delegates the execution to the abstraction layer, calling the method on the facade: settingsFacade.addCategory(). Then, the facade calls the methods on the services in the core layer - categoryApi.create() and settingsState.addCategory(). The order of invocation of those two methods depends on the synchronization strategy we choose (pessimistic or optimistic). Finally, the application state is propagated down to the presentation layer via the observable streams. This process is well-defined.

Modular design

We have covered the horizontal division in our system and the communication patterns across it. Now we are going to introduce a vertical separation into feature modules. The idea is to slice the application into feature modules representing different business functionalities. This is yet another step to deconstruct the system into smaller pieces for better maintainability. Each of the feature modules shares the same horizontal separation of the core, abstraction, and presentation layer. It is important to note that these modules can be lazily loaded (and preloaded) into the browser, decreasing the initial load time of the application. Below you can find a diagram illustrating the feature module separation.

Our application also has two additional modules for more technical reasons. We have a CoreModule that defines our singleton services, single-instance components, configuration, and exports any third-party modules needed in AppModule. This module is imported only once, in AppModule. The second module is SharedModule, which contains common components/pipes/directives and also exports commonly used Angular modules (like CommonModule). SharedModule can be imported by any feature module. The diagram below presents the imports structure.
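A minimal sketch of those two technical modules, following the description above (the concrete providers and declarations are placeholders, not taken from the original project):

// core.module.ts (sketch)
import { NgModule, Optional, SkipSelf } from '@angular/core';

@NgModule({
  providers: [ /* singleton services, e.g. SettingsState, CashflowCategoryApi */ ]
})
export class CoreModule {
  // guard against importing CoreModule anywhere other than AppModule
  constructor(@Optional() @SkipSelf() parentModule: CoreModule) {
    if (parentModule) {
      throw new Error('CoreModule is already loaded. Import it in AppModule only.');
    }
  }
}

// shared.module.ts (sketch)
import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';

@NgModule({
  imports: [CommonModule],
  declarations: [ /* shared components, pipes and directives */ ],
  exports: [CommonModule /* plus the shared components, pipes and directives */ ]
})
export class SharedModule { }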

Module directory structure

The diagram below presents how we can place all the pieces of our SettingsModule inside the directories. We can put the files inside folders with names representing their function.

Smart and dumb components

The final architectural pattern we introduce in this article is about the components themselves. We want to divide components into two categories, depending on their responsibilities. First, there are the smart components (aka containers). These components usually:
- have facade(s) and other services injected,
- communicate with the core layer,
- pass data to the dumb components,
- react to the events from dumb components,
- are top-level routable components (but not always!).
The previously presented CategoriesComponent is smart. It has SettingsFacade injected and uses it to communicate with the core layer of our application.

In the second category, there are dumb components (aka presentational). Their only responsibilities are to present a UI element and to delegate user interaction "up" to the smart components via events. Think of a native HTML element like <button>Click me</button>. That element does not have any particular logic implemented. We can think of the text 'Click me' as an input for this component. It also has some events that can be subscribed to, like the click event. Below you can find a code snippet of a simple presentational component with one input and no output events.

@Component({
selector: 'budget-progress',
templateUrl: './budget-progress.component.html',
styleUrls: ['./budget-progress.component.scss'],
changeDetection: ChangeDetectionStrategy.OnPush
})
export class BudgetProgressComponent {

@Input()
budget: Budget;
today: string;

}

Summary

We have covered a couple of ideas on how to design the architecture of an Angular application. These principles, if applied wisely, can help to maintain sustainable development speed over time, and allow new features to be delivered easily. Please don't treat them as strict rules, but rather as recommendations that could be employed when they make sense.

We have taken a close look at the abstraction layers, unidirectional data flow and reactive state management, modular design, and the smart/dumb components pattern. I hope that these concepts will be helpful in your projects and, as always, if you have any questions, I am more than happy to chat with you.

Originally published by Bartosz Pietrucha at angular-academy.com

Learn More

☞ Angular 8 (formerly Angular 2) - The Complete Guide

☞ Complete Angular 8 from Zero to Hero | Get Hired

☞ Learn and Understand AngularJS

☞ The Complete Angular Course: Beginner to Advanced

☞ Angular Crash Course for Busy Developers

☞ Angular Essentials (Angular 2+ with TypeScript)

☞ Angular (Full App) with Angular Material, Angularfire & NgRx

☞ Angular & NodeJS - The MEAN Stack Guide


Wrapping CommonJS library in Angular 8 using Mark.js

Introduction

From time to time in my daily tasks I have to implement some functionality that was already implemented by someone in a neat vanilla JS library, but… no Angular version or even an ES6 module of it is available to easily pull it into your Angular 8 application.

Yes, you can attach this lib in index.html with a script tag, but from my point of view it hurts maintainability. Also, you would have to do the same for every other Angular project where you might use it.

It is much better to create an Angular wrapper directive (or component) and publish it as an npm package so everyone (and you, of course) can easily re-use it in another project.

One such library is mark.js — a quite solid solution for highlighting search text inside a specified webpage section.

How mark.js works

In its original implementation, mark.js can be connected to a project in two ways:

$ npm install mark.js --save-dev

// in JS code
const Mark = require('mark.js');
let instance = new Mark(document.querySelector("div.context"));
instance.mark(keyword [, options]);

OR

<script src="vendor/mark.js/dist/mark.min.js"></script>

// in JS code
let instance = new Mark(document.querySelector("div.context"));
instance.mark(keyword [, options]);

And the result looks like this:
(Figure: mark.js run result, taken from the official Mark.js page: https://markjs.io/configurator.html)

You can play with it more on mark.js configurator page.

But can we use it in an Angular way? Say, like this:

// some Angular module
imports: [
...
MarkjsModule // imports markjsHighlightDirective
...
]

// in some component template
<div class="content_wrapper" 
     [markjsHighlight]="searchValue"
     [markjsConfig]="config"
>

Let's also add some additional functionality. Say, scroll the content_wrapper to the first highlighted word:

<div class="content_wrapper" 
     [markjsHighlight]="searchText"
     [markjsConfig]="config"
     [scrollToFirstMarked]="true"
>

Now let's implement and publish an Angular library with a demo application that will contain markjsHighlightDirective and its module.
We will name it ngx-markjs.

Planning Angular project structure

To generate an Angular project for our lib we will use Angular CLI.

npm install -g @angular/cli

Now let's create our project and add ngx-markjs lib to it:

ng new ngx-markjs-demo --routing=false --style=scss
// a lot of installation goes here

cd ngx-markjs-demo
ng generate lib ngx-markjs

And now let's add a markjsHighlightDirective starter to our ngx-markjs lib:

ng generate directive markjsHighlight --project=ngx-markjs

After deleting ngx-markjs.component.ts and ngx-markjs.service.ts (which were created automatically by the Angular CLI) in the projects/ngx-markjs/src/lib/ folder, we get the following directory structure for our project:
(Figure: ngx-markjs-demo project with the ngx-markjs lib)

To conveniently build our library, let's add two more lines to the scripts section of the project's package.json file:

"scripts": {
  "ng": "ng",
  "start": "ng serve --port 4201",
  "build": "ng build",
  "build:ngx-markjs": "ng build ngx-markjs && npm run copy:lib:dist", 
  "copy:lib:dist": "cp -r ./projects/ngx-markjs/src ./dist/ngx-markjs/src",
  "test": "ng test",
  "lint": "ng lint",
  "e2e": "ng e2e"
},

build:ngx-markjs — runs the build for the ngx-markjs library (but not for the parent demo project)

copy:lib:dist — it is convenient to have source files in npm packages as well, so this command will copy the library sources to the /dist/ngx-markjs folder (where the compiled module will be placed after the build:ngx-markjs command).

Now it's time to add the implementation code!

*Remark: the official Angular documentation about creating libraries recommends generating the starter without a main parent project, like this:
ng new my-workspace --create-application=false
But I decided to keep the main project and make it a demo application, just for my convenience.

Connecting a CommonJS lib into an Angular app

We need to do a few preparatory steps before we start implementing our directive:

#1. Load mark.js

The mark.js library which we want to wrap is provided in CommonJS format.

There are two ways to connect a CommonJS script:

a) Add it with a script tag to index.html:

<script src="vendor/mark.js/dist/mark.min.js"></script>

b) Add it to the angular.json file in the project root so the Angular builder will grab and apply it (as if it were included with a script tag); the path goes into the scripts array of the build options, as shown in the sketch after the listing:

"sourceRoot": "src",
"prefix": "app",
"architect": {
  "build": {
    "builder": "@angular-devkit/build-angular:browser",
    "options": {
      "outputPath": "dist/ngx-markjs-demo",
      "index": "src/index.html",
      "main": "src/main.ts",
      "polyfills": "src/polyfills.ts",
      "tsConfig": "tsconfig.app.json",
      "aot": false,
      "assets": [
        "src/favicon.ico",
        "src/assets"
      ],
      "styles": [
        "src/styles.scss"
      ],
      "scripts": []
    },
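With option (b), the path to the library's bundled file goes into that scripts array, roughly like this (the path assumes the standard layout of the installed mark.js npm package):

"scripts": [
  "node_modules/mark.js/dist/mark.min.js"
]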

#2. Adding mark.js to lib package.json

Now we should add the mark.js lib as a dependency to our library's package.json in the <root>/projects/ngx-markjs/src folder (don't mix it up with src/package.json — the file for the main parent project).
We can add it to the peerDependencies section — in that case, you should install mark.js manually prior to installing our wrapper package.

Or we can add mark.js to the dependencies section — then the mark.js package will be installed automatically when you run npm i ngx-markjs.

You can read more about the difference between package.json dependencies and peerDependencies in this great article.
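For illustration, the relevant part of the library's package.json (in projects/ngx-markjs) could look roughly like this (the version ranges are placeholders, and mark.js could live under peerDependencies instead if you prefer the manual-install approach):

{
  "name": "ngx-markjs",
  "version": "0.0.1",
  "peerDependencies": {
    "@angular/common": "^8.0.0",
    "@angular/core": "^8.0.0"
  },
  "dependencies": {
    "mark.js": "^8.11.0"
  }
}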

#3. Get the entity with a require call.

const Mark = require('mark.js');

In our case, I prefer to use require, since the mark.js code should be present only inside the markjsHighlight lib module and not in the whole application (until we actually use it there).

Small remark: some tslint configurations prevent using require in order to encourage ES6 modules, so in that case just wrap the require with /* tslint:disable */ comments. Like this:

/* tslint:disable */
const Mark = require('mark.js');
/* tslint:enable */

The project is ready. Now it is time to implement our markjsHighlightDirective.

Wrapping mark.js in a directive

Ok, so let's plan how our markjsHighlightDirective will work:

  1. It should be applied to the element with the content — to get the HTML element whose content will be searched (markjsHighlight input).
  2. It should accept a mark.js configuration object (markjsConfig input).
  3. And we should be able to switch the 'scroll to marked text' feature on and off (scrollToFirstMarked input).

For example:

<div class="content_wrapper" 
     [markjsHighlight]="searchText"
     [markjsConfig]="config"
     [scrollToFirstMarked]="true"
>

Now it is time to implement these requirements.

Adding mark.js to the library

Install mark.js to our project

npm install mark.js

And create its instance in a projects/ngx-markjs/src/lib/markjs-highlight.directive.ts file:

import {Directive} from '@angular/core';


declare var require: any;
const Mark = require('mark.js');


@Directive({
  selector: '[markjsHighlight]'
})
export class MarkjsHighlightDirective {

  constructor() {}

}

To prevent TypeScript warnings, I declared a global require variable.

Creating a basic directive starter

The very first starter for MarkjsHighlightDirective will be

@Directive({
  selector: '[markjsHighlight]' // our directive
})
export class MarkjsHighlightDirective implements OnChanges {

  @Input() markjsHighlight = '';  // our inputs
  @Input() markjsConfig: any = {};
  @Input() scrollToFirstMarked: boolean = false;

  @Output() getInstance = new EventEmitter<any>();

  markInstance: any;

  constructor(
    private contentElementRef: ElementRef, // host element ref
    private renderer: Renderer2 // we will use it to scroll
  ) {
  }

  ngOnChanges(changes) {
    // if searchText is changed - redo marking
    if (!this.markInstance) {
      // emit mark.js instance (if needed)
      this.markInstance = new Mark(this.contentElementRef.nativeElement);
      this.getInstance.emit(this.markInstance);
    }

    this.hightlightText(); // should be implemented

    if (this.scrollToFirstMarked) {
      this.scrollToFirstMarkedText(); // should be implemented
    }
  }
}

Ok, so let's go through this starter code:

  1. We defined three inputs for the searchText value, the config and the scrolling on/off functionality (as we planned earlier).
  2. The ngOnChanges lifecycle hook emits the instance of Mark.js to the parent component (in case you want to implement some additional Mark.js behavior).
    Also, each time searchText is changed we should redo the text highlighting (since the search text is different now) — this functionality will be implemented in the this.hightlightText method.
    And if scrollToFirstMarked is set to true — then we should run this.scrollToFirstMarkedText.

Implementing highlight functionality

Our method this.hightlightText should receive the searchText value, unmark the previous search results and do the new text highlighting. It can be done with this code:

hightlightText() {
  this.markjsHighlight = this.markjsHighlight || '';

  if (this.markjsHighlight && this.markjsHighlight.length <= 2) {
    this.markInstance.unmark();
    return;
  } else {
    this.markInstance.unmark({
      done: () => {
        this.markInstance.mark((this.markjsHighlight || ''), this.markjsConfig);
      }
    });
  }
}

The code is self-explanatory: we check that the markjsHighlight value is not null or undefined (because with these values the Mark.js instance throws an error).

Then we check the text length. If it is too short (two characters or fewer), we unmark the previous text and return.

Otherwise, we unmark the previously highlighted text and start a new highlighting process.

Implementing a "scroll to first marked result" feature

One important remark before we start implementing the scroll feature: the content wrapper element we apply our directive to should have its CSS position set to something other than static (for example, position: relative). Otherwise the offset to scroll to will be calculated improperly.

OK, let's code the this.scrollToFirstMarkedText method:

constructor(
  private contentElementRef: ElementRef,
  private renderer: Renderer2
) {
}

...

scrollToFirstMarkedText() {
  const content = this.contentElementRef.nativeElement;

  // calculating offset to the first marked element
  const firstOffsetTop = (content.querySelector('mark') || {}).offsetTop || 0;

  this.scrollSmooth(content, firstOffsetTop); // start scroll
}

scrollSmooth(scrollElement, firstOffsetTop) {
  const renderer = this.renderer;

  if (cancelAnimationId) {
    cancelAnimationFrame(cancelAnimationId);
  }
  const currentScrollTop = scrollElement.scrollTop;
  const delta = firstOffsetTop - currentScrollTop;

  animate({
    duration: 500,
    timing(timeFraction) {
      return timeFraction;
    },
    draw(progress) {
      const nextStep = currentScrollTop + progress * delta;
      // set scroll with Angular renderer
      renderer.setProperty(scrollElement, 'scrollTop', nextStep);
    }
  });
}

...

let cancelAnimationId;

// helper function for smooth scroll
function animate({timing, draw, duration}) {
  const start = performance.now();
  cancelAnimationId = requestAnimationFrame(function animate2(time) {
    // timeFraction goes from 0 to 1
    let timeFraction = (time - start) / duration;
    if (timeFraction > 1) {
      timeFraction = 1;
    }
    // calculate the current animation state
    const progress = timing(timeFraction);
    draw(progress); // draw it
    if (timeFraction < 1) {
      cancelAnimationId = requestAnimationFrame(animate2);
    }
  });
}

How it works:

  1. We get the content wrapper element (injected in the constructor by Angular) and query for the first highlighted text node (to highlight text, Mark.js wraps it in a <mark></mark> HTML element).
  2. Then we start the this.scrollSmooth function. scrollSmooth cancels the previous scroll (if any), calculates the scroll difference delta (the diff between the current scroll position and the offsetTop of the marked element) and calls the animate function, which calculates the timings for smooth scrolling and does the actual scroll (by calling renderer.setProperty(scrollElement, 'scrollTop', nextStep)).
  3. The animate function is a helper taken from a very good JavaScript learning tutorial site, javascript.info.

Our directive is ready! You can take a look at the full code here.

The only thing left to do is to add the directive to the NgxMarkjsModule module:

import { NgModule } from '@angular/core';
import { MarkjsHighlightDirective } from './markjs-highlight.directive';



@NgModule({
  declarations: [MarkjsHighlightDirective],
  imports: [
  ],
  exports: [MarkjsHighlightDirective]
})
export class NgxMarkjsModule { }

Applying Result

Now let's use it in our demo application:

1. Import NgxMarkjsModule to app.module.ts:

...
import {NgxMarkjsModule} from 'ngx-markjs';

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    NgxMarkjsModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

2. I added some content to app.component.html and applied the directive to it:

<div class="search_input">
  <input placeholder="Search..." #search type="text">
</div>
<div class="content_wrapper"
     [markjsHighlight]="searchText$ | async"
     [markjsConfig]="searchConfig"
     [scrollToFirstMarked]="true"
>
  <p>Lorem ipsum dolor sit amet, consectetur... a lot of text further</p>

3. In app.component.ts we should subscribe to the input change event and feed the search text to the markjsHighlight directive with the async pipe:

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.scss']
})
export class AppComponent implements AfterViewInit {
  title = 'ngx-markjs-demo';
  @ViewChild('search', {static: false}) searchElemRef: ElementRef;
  searchText$: Observable<string>;
  searchConfig = {separateWordSearch: false};

  ngAfterViewInit() {
    // create a stream from the input change event with the rxjs 'fromEvent' function
    this.searchText$ = fromEvent(this.searchElemRef.nativeElement, 'keyup').pipe(
      map((e: Event) => (e.target as HTMLInputElement).value),
      debounceTime(300),
      distinctUntilChanged()
    );
  }
}

Let's start it and take a look at result:

ng serve


We did it!
The last thing to do: we should publish our directive to the npm registry:

npm login
npm run build:ngx-markjs
cd ./dist/ngx-markjs
npm publish

And here it is in a registry: ngx-markjs.

Conclusion

Have you come across a neat vanilla JS library which you want to use in Angular? Now you know how to do that!

Pros

  1. Now we can easily import our directive into an Angular 8 project.
  2. The additional scroll functionality is quite neat — use it to improve the user experience.

Cons

  1. Possibly mark.js is implemented only for the browser. So if you plan to use it on some other platform (Angular allows it — read more about it here), it may not work.

Related links:

  1. Mark.js
  2. ngx-markjs github repo.

Build RESTful API In Laravel 5.8 Example


If you want to create web services with PHP, then I suggest using Laravel 5.8 to create the APIs, because Laravel provides a clean structure with authentication using Passport. With that structure in place, it becomes a very easy way to create REST APIs.

Just a few days ago, Laravel released its new version, Laravel 5.8. As we know, Laravel is popular in part because of its security features, so many developers choose Laravel to create REST APIs for mobile app development. Web services are very important when you build both web and mobile apps, because you can use the same database and work with the same data.

Follow the few steps below to create a RESTful API example in a Laravel 5.8 app.

Step 1: Download Laravel 5.8

I am going to explain step by step from scratch, so we need to get a fresh Laravel 5.8 application using the command below. Open your terminal or command prompt and run:

composer create-project --prefer-dist laravel/laravel blog

Step 2: Install Passport

In this step we need to install Passport via the Composer package manager, so open your terminal and run the command below:

composer require laravel/passport

After successfully installing the package, we need to run the default migrations to create the new Passport tables in our database, so let's run the command below:

php artisan migrate

Next, we need to install Passport using the passport:install command, which will create the token keys for security. So let's run the command below:

php artisan passport:install

Step 3: Passport Configuration

In this step, we have to make configuration changes in three places: the model, the service provider and the auth config file. Just make the following changes in those files.

In the model, we add the HasApiTokens trait from Passport.

In AuthServiceProvider, we add Passport::routes().

In auth.php, we add the api auth configuration.

app/User.php

<?php

namespace App;

use Illuminate\Notifications\Notifiable;
use Illuminate\Contracts\Auth\MustVerifyEmail;
use Laravel\Passport\HasApiTokens;
use Illuminate\Foundation\Auth\User as Authenticatable;

class User extends Authenticatable implements MustVerifyEmail
{
use HasApiTokens, Notifiable;

/**
 * The attributes that are mass assignable.
 *
 * @var array
 */
protected $fillable = [
    'name', 'email', 'password',
];

/**
 * The attributes that should be hidden for arrays.
 *
 * @var array
 */
protected $hidden = [
    'password', 'remember_token',
];

}

app/Providers/AuthServiceProvider.php

<?php

namespace App\Providers;

use Laravel\Passport\Passport;
use Illuminate\Support\Facades\Gate;
use Illuminate\Foundation\Support\Providers\AuthServiceProvider as ServiceProvider;

class AuthServiceProvider extends ServiceProvider
{
/**
* The policy mappings for the application.
*
* @var array
*/
protected $policies = [
'App\Model' => 'App\Policies\ModelPolicy',
];

/**
 * Register any authentication / authorization services.
 *
 * @return void
 */
public function boot()
{
    $this->registerPolicies();

    Passport::routes();
}

}

config/auth.php

<?php

return [
.....
'guards' => [
'web' => [
'driver' => 'session',
'provider' => 'users',
],
'api' => [
'driver' => 'passport',
'provider' => 'users',
],
],
.....
]

Step 4: Add Product Table and Model

Next, we need to create a migration for the products table using the Laravel 5.8 php artisan command, so first run the command below:

php artisan make:migration create_products_table

After this command you will find a new file in the database/migrations path, and you have to put the code below into that migration file to create the products table.

<?php

use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class CreateProductsTable extends Migration
{
/**
* Run the migrations.
*
* @return void
*/
public function up()
{
Schema::create('products', function (Blueprint $table) {
$table->increments('id');
$table->string('name');
$table->text('detail');
$table->timestamps();
});
}

/**
 * Reverse the migrations.
 *
 * @return void
 */
public function down()
{
    Schema::dropIfExists('products');
}

}

After creating the migration, we need to run it with the following command:

php artisan migrate

After create "products" table you should create Product model for products, so first create file in this path app/Product.php and put bellow content in item.php file:


app/Product.php

<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Product extends Model
{
/**
* The attributes that are mass assignable.
*
* @var array
*/
protected $fillable = [
'name', 'detail'
];
}

Step 5: Create API Routes

In this step, we will create the API routes. Laravel provides the api.php file for writing web service routes. So, let's add new routes to that file.

routes/api.php

<?php

/*
|--------------------------------------------------------------------------

API Routes
Here is where you can register API routes for your application. These
routes are loaded by the RouteServiceProvider within a group which
is assigned the "api" middleware group. Enjoy building your API!

*/

Route::post('register', 'API\RegisterController@register');

Route::middleware('auth:api')->group( function () {
Route::resource('products', 'API\ProductController');
});

Step 6: Create Controller Files

In the next step, we create the new controllers: BaseController, ProductController and RegisterController. I created a new "API" folder in the Controllers folder because we will keep the API controllers separate. So let's create the controllers:

app/Http/Controllers/API/BaseController.php

<?php

namespace App\Http\Controllers\API;

use Illuminate\Http\Request;
use App\Http\Controllers\Controller as Controller;

class BaseController extends Controller
{
/**
* success response method.
*
* @return \Illuminate\Http\Response
*/
public function sendResponse($result, $message)
{
$response = [
'success' => true,
'data' => $result,
'message' => $message,
];

    return response()->json($response, 200);
}

/**
 * return error response.
 *
 * @return \Illuminate\Http\Response
 */
public function sendError($error, $errorMessages = [], $code = 404)
{
    $response = [
        'success' => false,
        'message' => $error,
    ];

    if(!empty($errorMessages)){
        $response['data'] = $errorMessages;
    }

    return response()->json($response, $code);
}

}

app/Http/Controllers/API/ProductController.php

<?php

namespace App\Http\Controllers\API;

use Illuminate\Http\Request;
use App\Http\Controllers\API\BaseController as BaseController;
use App\Product;
use Validator;

class ProductController extends BaseController
{
/**
* Display a listing of the resource.
*
* @return \Illuminate\Http\Response
*/
public function index()
{
$products = Product::all();

    return $this->sendResponse($products->toArray(), 'Products retrieved successfully.');
}

/**
 * Store a newly created resource in storage.
 *
 * @param  \Illuminate\Http\Request  $request
 * @return \Illuminate\Http\Response
 */
public function store(Request $request)
{
    $input = $request->all();

    $validator = Validator::make($input, [
        'name' => 'required',
        'detail' => 'required'
    ]);

    if($validator->fails()){
        return $this->sendError('Validation Error.', $validator->errors());
    }

    $product = Product::create($input);

    return $this->sendResponse($product->toArray(), 'Product created successfully.');
}

/**
 * Display the specified resource.
 *
 * @param  int  $id
 * @return \Illuminate\Http\Response
 */
public function show($id)
{
    $product = Product::find($id);

    if (is_null($product)) {
        return $this-&gt;sendError('Product not found.');
    }

    return $this-&gt;sendResponse($product-&gt;toArray(), 'Product retrieved successfully.');
}

/**
 * Update the specified resource in storage.
 *
 * @param  \Illuminate\Http\Request  $request
 * @param  int  $id
 * @return \Illuminate\Http\Response
 */
public function update(Request $request, Product $product)
{
    $input = $request-&gt;all();

    $validator = Validator::make($input, [
        'name' =&gt; 'required',
        'detail' =&gt; 'required'
    ]);

    if($validator-&gt;fails()){
        return $this-&gt;sendError('Validation Error.', $validator-&gt;errors());       
    }

    $product-&gt;name = $input['name'];
    $product-&gt;detail = $input['detail'];
    $product-&gt;save();

    return $this-&gt;sendResponse($product-&gt;toArray(), 'Product updated successfully.');
}

/**
 * Remove the specified resource from storage.
 *
 * @param  int  $id
 * @return \Illuminate\Http\Response
 */
public function destroy(Product $product)
{
    $product-&gt;delete();

    return $this-&gt;sendResponse($product-&gt;toArray(), 'Product deleted successfully.');
}

}
</pre>

app/Http/Controllers/API/RegisterController.php

<pre class="ql-syntax" spellcheck="false"><?php

namespace App\Http\Controllers\API;

use Illuminate\Http\Request;
use App\Http\Controllers\API\BaseController as BaseController;
use App\User;
use Illuminate\Support\Facades\Auth;
use Validator;

class RegisterController extends BaseController
{
    /**
     * Register api
     *
     * @return \Illuminate\Http\Response
     */
    public function register(Request $request)
    {
        $validator = Validator::make($request->all(), [
            'name' => 'required',
            'email' => 'required|email',
            'password' => 'required',
            'c_password' => 'required|same:password',
        ]);

        if($validator->fails()){
            return $this->sendError('Validation Error.', $validator->errors());
        }

        $input = $request->all();
        $input['password'] = bcrypt($input['password']);
        $user = User::create($input);
        $success['token'] =  $user->createToken('MyApp')->accessToken;
        $success['name'] =  $user->name;

        return $this->sendResponse($success, 'User registered successfully.');
    }
}
</pre>

Now we are ready to run the full REST API, secured with Passport, in Laravel. Let's run the example with the following command:

<pre class="ql-syntax" spellcheck="false">php artisan serve
</pre>

Make sure that for the authenticated API calls you send the following headers:

<pre class="ql-syntax" spellcheck="false">'headers' => [
    'Accept' => 'application/json',
    'Authorization' => 'Bearer '.$accessToken,
]
</pre>

Here is Routes URL with Verb:

1) Login: Verb:POST, URL:http://localhost:8000/oauth/token

2) Register: Verb:POST, URL:http://localhost:8000/api/register

3) List: Verb:GET, URL:http://localhost:8000/api/products

4) Create: Verb:POST, URL:http://localhost:8000/api/products

5) Show: Verb:GET, URL:http://localhost:8000/api/products/{id}

6) Update: Verb:PUT, URL:http://localhost:8000/api/products/{id}

7) Delete: Verb:DELETE, URL:http://localhost:8000/api/products/{id}

Now you can simply call the URLs listed above from Postman or a similar REST client. (The original post includes screenshots of the Login, Register, Product List, Product Create, Product Show, Product Update, and Product Delete requests.)
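If you prefer to test from code rather than a REST client, here is a minimal sketch that registers a user and then lists products with the returned token. It assumes the app is running locally on port 8000 and that your Node.js version (18+) ships a global fetch; the credentials are made-up example values.

<pre class="ql-syntax" spellcheck="false">// Register a user, grab the Passport token, then call the protected products endpoint.
const base = 'http://localhost:8000/api';

async function demo() {
    const registerRes = await fetch(`${base}/register`, {
        method: 'POST',
        headers: { 'Accept': 'application/json', 'Content-Type': 'application/json' },
        body: JSON.stringify({
            name: 'Test User',
            email: 'test@example.com',
            password: 'secret123',
            c_password: 'secret123'
        })
    });
    const { data } = await registerRes.json(); // sendResponse() wraps the payload in "data"
    const accessToken = data.token;

    const productsRes = await fetch(`${base}/products`, {
        headers: { 'Accept': 'application/json', 'Authorization': 'Bearer ' + accessToken }
    });
    console.log(await productsRes.json());
}

demo();
</pre>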

I hope it can help you...

Thanks for reading ❤



Design patterns in Node.js: a practical guide

Design patterns in Node.js: a practical guide

What are design patterns?

Design patterns, simply put, are a way for you to structure your solution’s code in a way that allows you to gain some kind of benefit. Such as faster development speed, code reusability, and so on.

All patterns lend themselves quite easily to the OOP paradigm. Although given JavaScript’s flexibility, you can implement these concepts in non-OOP projects as well.

When it comes to design patterns, there are far too many to cover in a single article; entire books have been written about the topic, and new patterns appear every year, so any list is bound to be incomplete.

A very common classification for these patterns is the one used in the GoF book (the Gang of Four book), but since I'm going to review just a handful of them, I will ignore the classification and simply present you with a list of patterns you can start using in your code right now.

Immediately Invoked Function Expressions (IIFE)

The first pattern I’m going to show you is one that allows you to define and call a function at the same time. Due to the way JavaScript scoping works, IIFEs can be great for simulating things like private properties in classes. In fact, this particular pattern is sometimes used as part of the requirements of other, more complex ones. We’ll see how in a bit.

What does an IIFE look like?

But before we delve into the use cases and the mechanics behind it, let me quickly show you what it looks like exactly:

<pre class="ql-syntax" spellcheck="false">(function() {
    var x = 20;
    var y = 20;
    var answer = x + y;
    console.log(answer);
})();
</pre>

By pasting the above code into a Node.js REPL or even your browser’s console, you’d immediately get the result because, as the name suggests, you’re executing the function as soon as you define it.

The template for an IIFE consists of an anonymous function declaration wrapped in a set of parentheses (which turns the definition into a function expression), followed by a set of calling parentheses at the end. Like so:

<pre class="ql-syntax" spellcheck="false">(function(/*received parameters*/) {
    //your code here
})(/*parameters*/)
</pre>

Use cases

Although it might sound crazy, there are actually a few benefits and use cases where using an IIFE can be a good thing, for example:

Simulating static variables

Remember static variables from other languages such as C or C#? If you’re not familiar with them, a static variable gets initialized the first time you use it and then keeps the value you last set it to. The benefit is that if you define a static variable inside a function, that variable is shared by all invocations of the function, no matter how many times you call it, which greatly simplifies cases like this:

<pre class="ql-syntax" spellcheck="false">function autoIncrement() {
    static let number = 0
    number++
    return number
}
</pre>

The above function would return a new number every time we call it (assuming, of course, the static keyword were available to us in JS). We could do this with generators in JS, that’s true, but pretend we don’t have access to them; you could simulate a static variable like this:

<pre class="ql-syntax" spellcheck="false">let autoIncrement = (function() {
    let number = 0

    return function () {
        number++
        return number
    }
})()
</pre>

What you’re seeing there is the magic of closures all wrapped up inside an IIFE. Pure magic. You’re basically returning a new function that will be assigned to the autoIncrement variable (thanks to the actual execution of the IIFE). And with the scoping mechanics of JS, your function will always have access to the number variable (as if it were a global variable).
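For reference, here is a quick usage sketch of the counter above; the returned values follow directly from the closure keeping number alive between calls:

<pre class="ql-syntax" spellcheck="false">console.log(autoIncrement()) // 1
console.log(autoIncrement()) // 2
console.log(autoIncrement()) // 3 -- the closed-over "number" survives between calls
</pre>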

Simulating private variables

As you may (or may not) already know, ES6 classes treat every member as public; there are no private properties or methods out of the box. But thanks to IIFEs you can simulate them if you want to.

<pre class="ql-syntax" spellcheck="false">const autoIncrementer = (function() {
    let value = 0;

    return {
        incr() {
            value++
        },
        get value() {
            return value
        }
    };
})();

> autoIncrementer.incr()
undefined
> autoIncrementer.incr()
undefined
> autoIncrementer.value
2
> autoIncrementer.value = 3
3
> autoIncrementer.value
2
</pre>

The above code shows you a way to do it. Although you’re not specifically defining a class which you can instantiate afterward, mind you, you are defining a structure, a set of properties and methods which can make use of variables that are common to the object you’re creating, but that are not accessible (as shown through the failed assignment) from outside.




Factory method pattern

This one, in particular, is one of my favorite patterns, since it acts as a tool you can implement to clean your code up a bit.

In essence, the factory method allows you to centralize the logic of creating objects (meaning, which object to create and why) in a single place. This allows you to forget about that part and focus on simply requesting the object you need and then using it.

This might seem like a small benefit, but bear with me for a second, it’ll make sense, trust me.

What does the factory method pattern look like?

This particular pattern would be easier to understand if you first look at its usage, and then at its implementation.

Here is an example:

<pre class="ql-syntax" spellcheck="false">( _ => {
    let factory = new MyEmployeeFactory()

    let types = ["fulltime", "parttime", "contractor"]
    let employees = [];
    for(let i = 0; i < 100; i++) {
        employees.push(factory.createEmployee({type: types[Math.floor(Math.random() * types.length)]}))
    }
    //....
    employees.forEach( e => {
        console.log(e.speak())
    })
})()
</pre>

The key takeaway from the above code is the fact that you’re adding objects to the same array, all of which share the same interface (in the sense they have the same set of methods) but you don’t really need to care about which object to create and when to do it.

You can now look at the actual implementation; there is a lot to look at, but it’s quite straightforward:

<pre class="ql-syntax" spellcheck="false">class Employee {
    speak() {
        return "Hi, I'm a " + this.type + " employee"
    }
}

class FullTimeEmployee extends Employee {
    constructor(data) {
        super()
        this.type = "full time"
        //....
    }
}

class PartTimeEmployee extends Employee {
    constructor(data) {
        super()
        this.type = "part time"
        //....
    }
}

class ContractorEmployee extends Employee {
    constructor(data) {
        super()
        this.type = "contractor"
        //....
    }
}

class MyEmployeeFactory {
    createEmployee(data) {
        if(data.type == 'fulltime') return new FullTimeEmployee(data)
        if(data.type == 'parttime') return new PartTimeEmployee(data)
        if(data.type == 'contractor') return new ContractorEmployee(data)
    }
}
</pre>

Use case

The previous code already shows a generic use case, but if we wanted to be more specific, one particular use case I like to use this pattern for is handling error object creation.

Imagine an Express application with about 10 endpoints, where every endpoint needs to return two or three different errors based on the user input. We’re talking about 30 statements like the following:

<pre class="ql-syntax" spellcheck="false">if(err) {
    res.json({error: true, message: "Error message here"})
}
</pre>

Now, that wouldn’t be a problem until the next time you suddenly had to add a new attribute to the error object. Then you’d have to go over your entire project, modifying all 30 places. That could be solved by moving the definition of the error object into a class. That would be great, unless of course you had more than one error object, and then you’re back to deciding which object to instantiate based on some logic only you know. See where I’m trying to get to?

If you were to centralize the logic for creating the error object then all you’d have to do throughout your code would be something like:

<pre class="ql-syntax" spellcheck="false">if(err) {
    res.json(ErrorFactory.getError(err))
}
</pre>

That is it, you’re done, and you never have to change that line again.
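The article doesn’t show the ErrorFactory itself, so here is a minimal sketch of what such a factory might look like. The error class names and the properties on err are assumptions for illustration, not part of the original:

<pre class="ql-syntax" spellcheck="false">class ValidationError {
    constructor(err) {
        this.error = true
        this.type = 'validation'
        this.message = err.message
    }
}

class NotFoundError {
    constructor(err) {
        this.error = true
        this.type = 'not-found'
        this.message = err.message
    }
}

class GenericError {
    constructor(err) {
        this.error = true
        this.type = 'generic'
        this.message = err.message
    }
}

// Centralizes the decision of which error object to build,
// so callers only ever write: res.json(ErrorFactory.getError(err))
class ErrorFactory {
    static getError(err) {
        if(err.name === 'ValidationError') return new ValidationError(err)
        if(err.name === 'NotFoundError') return new NotFoundError(err)
        return new GenericError(err)
    }
}

module.exports = ErrorFactory
</pre>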

Singleton pattern

This one is another oldie but a goodie. It’s quite a simple pattern, mind you, but it helps you keep the number of instances of a class down to exactly one, all of the time. Essentially, the singleton pattern lets you instantiate an object once and then hand back that same instance every time you need it, instead of creating a new one, and without having to keep track of a reference to it either globally or by passing it around as a dependency everywhere.

What does the singleton pattern look like?

Normally, other languages implement this pattern using a single static property where they store the instance once it exists. The problem here is that, as I mentioned before, we don’t have access to static variables in JS. So we could implement this in two ways: one would be by using IIFEs instead of classes.

The other would be to use ES6 modules and have our singleton class use a module-scoped variable in which to store our instance. By doing this, the class itself gets exported out of the module, but the variable holding the instance remains local to the module.

I know, but trust me, it sounds a lot more complicated than it actually is:

<pre class="ql-syntax" spellcheck="false">let instance = null

class SingletonClass {
    constructor() {
        this.value = Math.random()
    }

    printValue() {
        console.log(this.value)
    }

    static getInstance() {
        if(!instance) {
            instance = new SingletonClass()
        }
        return instance
    }
}

module.exports = SingletonClass
</pre>

And you could use it like this:

<pre class="ql-syntax" spellcheck="false">const Singleton = require("./singleton")

const obj = Singleton.getInstance()
const obj2 = Singleton.getInstance()

obj.printValue()
obj2.printValue()

console.log("Equals:: ", obj === obj2)
</pre>

The output of course being:

<pre class="ql-syntax" spellcheck="false">0.5035326348000628
0.5035326348000628
Equals:: true
</pre>

Confirming that indeed, we’re only instantiating the object once, and returning the existing instance.
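For completeness, the IIFE-based variant mentioned earlier isn’t shown in the article; a rough sketch of how it could look (the instance shape is just illustrative) might be:

<pre class="ql-syntax" spellcheck="false">const Singleton = (function() {
    let instance = null // kept private inside the IIFE's closure

    function createInstance() {
        return {
            value: Math.random(),
            printValue() { console.log(this.value) }
        }
    }

    return {
        getInstance() {
            if(!instance) {
                instance = createInstance()
            }
            return instance
        }
    }
})()

const a = Singleton.getInstance()
const b = Singleton.getInstance()
console.log(a === b) // true
</pre>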

Use cases

When trying to decide if you need a singleton-like implementation or not, you need to consider something: how many instances of your classes will you really need? If the answer is 2 or more, then this is not your pattern.

But there might be times, such as when dealing with database connections, when you might want to consider it.

Think about it: once you’ve connected to your database, it might be a good idea to keep that connection alive and accessible throughout your code. This can be solved in a lot of different ways, yes, but this pattern is indeed one of them.

Using the above example, we can extrapolate it into something like this:

<pre class="ql-syntax" spellcheck="false">const driver = require("...")

let instance = null

class DBClass {

    constructor(props) {
        this.properties = props
        this._conn = null
    }

    connect() {
        this._conn = driver.connect(this.properties)
    }

    get conn() {
        return this._conn
    }

    static getInstance(props) {
        if(!instance) {
            instance = new DBClass(props)
        }
        return instance
    }
}

module.exports = DBClass
</pre>

And now, you’re sure that no matter where you are if you’re using the getInstance method, you’ll be returning the only active connection (if any).
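A quick usage sketch (the file name and connection options are assumptions) showing that two requires of the same module end up sharing one instance, and therefore one connection:

<pre class="ql-syntax" spellcheck="false">// any file in the code base can do this and still talk to the same connection:
const DBClass = require("./dbclass") // assumed file name

const db = DBClass.getInstance({ host: "localhost" }) // assumed connection options
db.connect()

// elsewhere in the code base:
const sameDb = DBClass.getInstance()
console.log(db === sameDb) // true -- same instance, same underlying connection
</pre>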

Observer pattern

This one is a very interesting pattern, in the sense that it allows you to respond to certain input by being reactive to it, instead of proactively checking if the input is provided. In other words, with this pattern, you can specify what kind of input you’re waiting for and passively wait until that input is provided in order to execute your code. It’s a set and forget kind of deal, if you will.

Here, the observers are your objects: they know the type of input they want to receive and the action to respond with, and they’re meant to “observe” another object and wait for it to communicate with them.

The observable, on the other hand, will let the observers know when new input is available, so they can react to it, if applicable. If this sounds familiar, it’s because it is: anything that deals with events in Node is implementing this pattern.

What does the observer pattern look like?

Have you ever written your own HTTP server? Something like this:

<pre class="ql-syntax" spellcheck="false">const http = require('http');

const server = http.createServer((req, res) => {
    res.statusCode = 200;
    res.setHeader('Content-Type', 'text/plain');
    res.end('Your own server here');
});

server.on('error', err => {
    console.log("Error:: ", err)
})

server.listen(3000, '127.0.0.1', () => {
    console.log('Server up and running');
});
</pre>

There, hidden in the above code, you’re looking at the observer pattern in the wild, or at least an implementation of it. Your server object acts as the observable, while your callback function is the actual observer. The event-like interface here (the on method together with the event name) might obfuscate the view a bit, but consider the following implementation:

<pre class="ql-syntax" spellcheck="false">class Observable {
    constructor() {
        this.observers = {}
    }

    on(input, observer) {
        if(!this.observers[input]) this.observers[input] = []
        this.observers[input].push(observer)
    }

    triggerInput(input, params) {
        this.observers[input].forEach( o => {
            o.apply(null, params)
        })
    }
}

class Server extends Observable {
    constructor() {
        super()
    }

    triggerError() {
        let errorObj = {
            errorCode: 500,
            message: 'Port already in use'
        }
        this.triggerInput('error', [errorObj])
    }
}
</pre>

You can now, again, set the same observer, in exactly the same way:

<pre class="ql-syntax" spellcheck="false">const server = new Server() // using the Server class defined above

server.on('error', err => {
    console.log("Error:: ", err)
})
</pre>

And if you were to call the triggerError method (which is there to show you how you would let your observers know that there is new input for them), you’d get the exact same output:

<pre class="ql-syntax" spellcheck="false">Error:: { errorCode: 500, message: 'Port already in use' } </pre>

If you were to be considering using this pattern in Node.js, please look at the EventEmitter object first, since it’s Node.js’ own implementation of this pattern, and might save you some time.
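As a point of comparison, here is roughly the same thing built on Node’s own EventEmitter; the error payload is just an illustrative value:

<pre class="ql-syntax" spellcheck="false">const EventEmitter = require('events')

class Server extends EventEmitter {
    triggerError() {
        // Notify every registered 'error' observer
        this.emit('error', { errorCode: 500, message: 'Port already in use' })
    }
}

const server = new Server()
server.on('error', err => {
    console.log("Error:: ", err)
})
server.triggerError() // Error::  { errorCode: 500, message: 'Port already in use' }
</pre>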

Use cases

This pattern is, as you might have already guessed, great for dealing with asynchronous calls, since getting the response from an external request can be considered a new input. And what do we have in Node.js, if not a constant influx of asynchronous code into our projects? So next time you’re having to deal with an async scenario consider looking into this pattern.

Another widely spread use case for this pattern, as you’ve seen, is that of triggering particular events. This pattern can be found on any module that is prone to having events triggered asynchronously (such as errors or status updates). Some examples are the HTTP module, any database driver, and even socket.io, which allows you to set observers on particular events triggered from outside your own code.

Chain of responsibility

The chain of responsibility pattern is one that many of us in the world of Node.js have used, without even realizing it.

It consists of structuring your code in a way that allows you to decouple the sender of a request from the object that can fulfill it. In other words, with object A sending request R, and three different receiving objects R1, R2, and R3, how can A know which one it should send R to? Should A care about that?

The answer to the last question is: no, it shouldn’t. So instead, if A shouldn’t care about who’s going to take care of the request, why don’t we let R1, R2 and R3 decide by themselves?

Here is where the chain of responsibility comes into play: we create a chain of receiving objects, each of which will try to fulfill the request and, if it can’t, will pass it along. Does it sound familiar yet?

What does the chain of responsibility look like?

Here is a very basic implementation of this pattern. As you can see at the bottom, we have four possible values (or requests) that we need to process, but we don’t care who gets to process them; we just need at least one function to handle each one, so we send the value down the chain and let each member decide whether to handle it or pass it along.

<pre class="ql-syntax" spellcheck="false">function processRequest(r, chain) {
    let lastResult = null
    let i = 0
    do {
        lastResult = chain[i](r)
        i++
    } while(lastResult != null && i < chain.length)

    if(lastResult != null) {
        console.log("Error: request could not be fulfilled")
    }
}

let chain = [
    function (r) {
        if(typeof r == 'number') {
            console.log("It's a number: ", r)
            return null
        }
        return r
    },
    function (r) {
        if(typeof r == 'string') {
            console.log("It's a string: ", r)
            return null
        }
        return r
    },
    function (r) {
        if(Array.isArray(r)) {
            console.log("It's an array of length: ", r.length)
            return null
        }
        return r
    }
]

processRequest(1, chain)
processRequest([1,2,3], chain)
processRequest('[1,2,3]', chain)
processRequest({}, chain)
</pre>

The output being:

<pre class="ql-syntax" spellcheck="false">It's a number:  1
It's an array of length:  3
It's a string:  [1,2,3]
Error: request could not be fulfilled
</pre>

Use cases

The most obvious example of this pattern in our ecosystem is Express middleware. With it, you’re essentially setting up a chain of functions (middlewares) that evaluate the request object and decide whether to act on it or ignore it. You can think of it as the asynchronous version of the above example, where instead of checking whether the function returns a value, you check what values are passed to the next callback it calls.

<pre class="ql-syntax" spellcheck="false">var express = require('express');
var app = express();

app.use(function (req, res, next) {
    console.log('Time:', Date.now());
    next(); //call the next function on the chain
});
</pre>

Middlewares are a particular implementation of this pattern since instead of only one member of the chain fulfilling the request, one could argue that all of them could do it. Nevertheless, the rationale behind it is the same.
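To make the parallel concrete, here is a sketch (the route and header names are made up) in which one middleware in the chain fulfills the request and simply doesn’t call next, which ends the chain:

<pre class="ql-syntax" spellcheck="false">var express = require('express');
var app = express();

// First link: only handles requests missing an API key, otherwise passes along
app.use(function (req, res, next) {
    if(!req.headers['x-api-key']) {
        return res.status(401).json({error: true, message: 'Missing API key'}) // fulfilled here, next() never called
    }
    next();
});

// Second link: only handles /ping, otherwise passes along
app.use(function (req, res, next) {
    if(req.path === '/ping') {
        return res.json({pong: true})
    }
    next();
});

// Last link: nobody else fulfilled the request
app.use(function (req, res) {
    res.status(404).json({error: true, message: 'Request could not be fulfilled'})
});

app.listen(3000);
</pre>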

Final thoughts

These are but a few of the patterns you might run into daily without even realizing it. I’d encourage you to look into the rest of them; even if you don’t find an immediate use case, now that I’ve shown you how some of them look in the wild, you might start spotting them yourself! Hopefully, this article has shed some light on the subject and helps you improve your coding fu faster than ever. See you in the next one!


Originally published by Fernando Doglio at blog.logrocket.com
