1622587920
Avoiding runtime errors using frontend monitoring tools

As developers, we test our applications extensively before they go live. But as we all know, no matter how thoroughly we test, runtime issues are inevitable. Therefore, monitoring every aspect of the application is a must.
This article introduces five different tools for monitoring web application frontends live in the user's browser.
Comparison of the five tools by author
#front-end-development #monitoring #javascript #react
1598959140
Many enterprises and SaaS companies depend on a variety of external API integrations in order to build an awesome customer experience. Some integrations outsource certain business functionality, such as handling payments or search, to companies like Stripe and Algolia. Others expand the functionality of your product offering. For example, if you want to add real-time alerts to an analytics tool, you might integrate the PagerDuty and Slack APIs into your application.
If you're like most companies though, you'll soon realize you're integrating hundreds of different vendors and partners into your app. Any one of them could have performance or functional issues impacting your customer experience. Worse yet, the reliability of an integration may be less visible than that of your own APIs and backend. If the login functionality is broken, you'll have many customers complaining they cannot log into your website. However, if your Slack integration is broken, only the customers who added Slack to their account will be impacted. On top of that, since the integration is asynchronous, your customers may not realize it is broken until days later, when they notice they haven't received any alerts for some time.
How do you ensure your API integrations are reliable and high performing? After all, if you're selling real-time alerting as a feature, your alerts had better be real-time and offer at-least-once delivery. Dropping alerts because your Slack or PagerDuty integration failed is unacceptable from a customer experience perspective.
Specific API integrations that have an exceedingly high latency could be a signal that your integration is about to fail. Maybe your pagination scheme is incorrect or the vendor has not indexed your data in the best way for you to efficiently query.
Average latency only tells you half the story. An API that consistently takes one second to complete is usually better than an API with high variance. For example, if an API takes only 30 milliseconds on average, but 1 out of 10 calls takes up to five seconds, then you have high variance in your customer experience. This makes bugs much harder to track down and degradations harder to handle. This is why the 90th and 95th percentiles are important to look at.
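As an illustration, here is a minimal sketch of computing percentile latencies from a window of recorded durations. The sample values are hypothetical; production monitoring systems typically use streaming estimators rather than sorting the full window:

```javascript
// Compute a latency percentile from recorded durations (in milliseconds).
function percentile(latencies, p) {
  const sorted = [...latencies].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

// Nine fast calls and one five-second outlier: the average is already skewed,
// but the tail percentile makes the outlier unmistakable.
const samples = [30, 30, 30, 30, 30, 30, 30, 30, 30, 5000];
const avg = samples.reduce((a, b) => a + b, 0) / samples.length;
console.log(avg);                     // 527
console.log(percentile(samples, 50)); // 30
console.log(percentile(samples, 95)); // 5000
```

Note how the median (50th percentile) hides the outlier entirely, which is why both a central and a tail percentile are worth tracking.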
Reliability is a key metric to monitor, especially since you're integrating APIs that you don't have control over. What percent of API calls are failing? In order to track reliability, you should have a rigid definition of what constitutes a failure.
While any API call with a response status code in the 4xx or 5xx family may be considered an error, you might have specific business cases where the API appears to complete successfully yet the call should still be considered a failure. For example, a data API integration that consistently returns no matches or no content could be considered failing even though the status code is always 200 OK. Another API could be returning bogus or incomplete data. Data validation is critical for measuring whether the data returned is correct and up to date.
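As a sketch, a failure predicate can combine the status code with business-level checks. The response shape and the empty-result rule below are assumptions for illustration; adapt them to each vendor's actual contract:

```javascript
// Classify an API call as failed using both transport- and business-level checks.
// The { status, body } shape and the "empty matches" rule are hypothetical.
function isFailure(response) {
  if (response.status >= 400) return true; // 4xx/5xx family
  const body = response.body || {};
  if (Array.isArray(body.matches) && body.matches.length === 0) {
    return true; // 200 OK but no content: treat as a business-level failure
  }
  return false;
}

console.log(isFailure({ status: 503, body: {} }));                  // true
console.log(isFailure({ status: 200, body: { matches: [] } }));     // true
console.log(isFailure({ status: 200, body: { matches: [1, 2] } })); // false
```

In practice you would flag a persistent pattern of empty results rather than any single one, per the "consistently returns no content" caveat above.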
Not every API provider and integration partner follows the suggested status code mappings.
While reliability is specific to errors and functional correctness, availability and uptime are pure infrastructure metrics that measure how often a service has an outage, even a temporary one. Availability is usually measured as a percentage of uptime per year, or as a number of nines.
Availability %              Downtime/year        Downtime/month       Downtime/week        Downtime/day
90% ("one nine")            36.53 days           73.05 hours          16.80 hours          2.40 hours
99% ("two nines")           3.65 days            7.31 hours           1.68 hours           14.40 minutes
99.9% ("three nines")       8.77 hours           43.83 minutes        10.08 minutes        1.44 minutes
99.99% ("four nines")       52.60 minutes        4.38 minutes         1.01 minutes         8.64 seconds
99.999% ("five nines")      5.26 minutes         26.30 seconds        6.05 seconds         864.00 milliseconds
99.9999% ("six nines")      31.56 seconds        2.63 seconds         604.80 milliseconds  86.40 milliseconds
99.99999% ("seven nines")   3.16 seconds         262.98 milliseconds  60.48 milliseconds   8.64 milliseconds
99.999999% ("eight nines")  315.58 milliseconds  26.30 milliseconds   6.05 milliseconds    864.00 microseconds
99.9999999% ("nine nines")  31.56 milliseconds   2.63 milliseconds    604.80 microseconds  86.40 microseconds
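The downtime figures fall straight out of the availability percentage; a small sketch reproducing the arithmetic (using the 365.25-day year the table assumes):

```javascript
// Allowed downtime, in hours, for a given availability percentage over a period.
function downtimeHours(availabilityPct, periodHours) {
  return periodHours * (1 - availabilityPct / 100);
}

const YEAR = 24 * 365.25; // 8766 hours
console.log(downtimeHours(99.9, YEAR).toFixed(2));         // "8.77" hours/year ("three nines")
console.log((downtimeHours(99.99, YEAR) * 60).toFixed(2)); // "52.60" minutes/year ("four nines")
```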
Many API providers are priced on API usage. Even if the API is free, they most likely have some sort of rate limiting implemented on the API to ensure bad actors are not starving out good clients. This means tracking your API usage with each integration partner is critical to understand when your current usage is close to the plan limits or their rate limits.
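As an illustration, many providers expose remaining quota via response headers, so a client can warn itself before hitting the limit. The X-RateLimit-* header names below are a common convention (used by GitHub and others) but are not standardized, so treat them as an assumption and check each vendor's documentation:

```javascript
// Flag when the remaining quota on an API response drops below a threshold.
function rateLimitStatus(headers, warnRatio = 0.1) {
  const limit = Number(headers['x-ratelimit-limit']);
  const remaining = Number(headers['x-ratelimit-remaining']);
  if (!Number.isFinite(limit) || !Number.isFinite(remaining)) {
    return { known: false }; // provider does not expose these headers
  }
  return { known: true, remaining, nearLimit: remaining <= limit * warnRatio };
}

const status = rateLimitStatus({
  'x-ratelimit-limit': '5000',
  'x-ratelimit-remaining': '120',
});
console.log(status); // { known: true, remaining: 120, nearLimit: true }
```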
It's recommended to tie usage back to your end users even if the API integration is quite downstream from your customer experience. This enables measuring the direct ROI of specific integrations and finding trends. For example, let's say your product is a CRM, and you are paying Clearbit $199 a month to enrich up to 2,500 companies. That is a direct cost tied to your customers' usage. If free-tier customers are consuming most of your Clearbit quota, you may want to reconsider your pricing strategy. Potentially, Clearbit enrichment should be offered on paid tiers only, to reduce your own cost.
Monitoring API integrations seems like the correct remedy to stay on top of these issues. However, traditional Application Performance Monitoring (APM) tools like New Relic and AppDynamics focus more on monitoring the health of your own websites and infrastructure. This includes infrastructure metrics like memory usage and requests per minute, along with application-level health such as Apdex scores and latency. Of course, if you're consuming an API that's running on someone else's infrastructure, you can't just ask your third-party providers to install an APM agent that you have access to. This means you need a way to monitor third-party APIs indirectly or via some other instrumentation methodology.
#monitoring #api integration #api monitoring #monitoring and alerting #monitoring strategies #monitoring tools #api integrations #monitoring microservices
1598684574
Create a new web app or revamp your existing website?
Every website or web application we see with an interactive, user-friendly interface is the work of front-end developers who bring all the visual effects into existence. Hence, building a visually appealing web app requires front-end development.
At HourlyDeveloper.io, you can Hire FrontEnd Developers, as we have been actively working on new frontend development as well as frontend re-engineering projects from older technologies to newer ones.
Consult with experts: https://bit.ly/2YLhmFZ
#hire frontend developers #frontend developers #frontend development company #frontend development services #frontend development #frontend
1647540000
The Substrate Knowledge Map provides information that you—as a Substrate hackathon participant—need to know to develop a non-trivial application for your hackathon submission.
The map covers 6 main sections:
Each section contains basic information on each topic, with links to additional documentation for you to dig deeper. Within each section, you'll find a mix of quizzes and labs to test your knowledge as you progress through the map. The goal of the labs and quizzes is to help you consolidate what you've learned and put it into practice with some hands-on activities.
One question we often get is why learn the Substrate framework when we can write smart contracts to build decentralized applications?
The short answer is that using the Substrate framework and writing smart contracts are two different approaches.
Traditional smart contract platforms allow users to publish additional logic on top of some core blockchain logic. Since smart contract logic can be published by anyone, including malicious actors and inexperienced developers, there are a number of intentional safeguards and restrictions built around these public smart contract platforms. For example:
Fees: Smart contract developers must ensure that contract users are charged for the computation and storage they impose on the computers running their contract. With fees, block creators are protected from abuse of the network.
Sandboxed: A contract is not able to modify core blockchain storage or storage items of other contracts directly. Its power is limited to only modifying its own state, and the ability to make outside calls to other contracts or runtime functions.
Reversion: Contracts are prone to undesirable situations that lead to logical errors, requiring them to be reverted or upgraded. Developers need to learn additional patterns, such as splitting their contract's logic and data, to ensure seamless upgrades.
These safeguards and restrictions make running smart contracts slower and more costly. However, it's important to consider the different developer audiences for contract development versus Substrate runtime development.
Building decentralized applications with smart contracts allows your community to extend and develop on top of your runtime logic without worrying about proposals, runtime upgrades, and so on. You can also use smart contracts as a testing ground for future runtime changes, but done in an isolated way that protects your network from any errors the changes might introduce.
In summary, smart contract development:
Unlike traditional smart contract development, Substrate runtime development offers none of the network protections or safeguards. Instead, as a runtime developer, you have total control over how the blockchain behaves. However, this level of control also means that there is a higher barrier to entry.
Substrate is a framework for building blockchains, which almost makes comparing it to smart contract development like comparing apples and oranges. With the Substrate framework, developers can build smart contracts but that is only a fraction of using Substrate to its full potential.
With Substrate, you have full control over the underlying logic that your network's nodes will run. You also have full access for modifying and controlling each and every storage item across your runtime modules. As you progress through this map, you'll discover concepts and techniques that will help you to unlock the potential of the Substrate framework, giving you the freedom to build the blockchain that best suits the needs of your application.
You'll also discover how you can upgrade the Substrate runtime with a single transaction instead of having to organize a community hard-fork. Upgradeability is one of the primary design features of the Substrate framework.
In summary, runtime development:
To learn more about using smart contracts within Substrate, refer to the Smart Contract - Overview page as well as the Polkadot Builders Guide.
If you need any community support, please join the following channels based on the area where you need help:
Alternatively, also look for support on Stack Overflow, where questions are tagged with "substrate", or on the Parity Subport repo.
Use the following links to explore the sites and resources available on each:
Substrate Developer Hub has the most comprehensive all-round coverage of Substrate, from a "big picture" explanation of architecture to specific technical concepts. The site also provides tutorials to guide you as you learn the Substrate framework, plus the API reference documentation. You should check this site first if you want to look up information about Substrate runtime development. The site consists of:
Knowledge Base: Explaining the foundational concepts of building blockchain runtimes using Substrate.
Tutorials: Hands-on tutorials for developers to follow. The first six tutorials cover the fundamentals of Substrate and are recommended for every Substrate learner to go through.
How-to Guides: These resources are like the O'Reilly cookbook series, written in a task-oriented way for readers to get the job done. Some examples of the topics covered include:
API docs: Substrate API reference documentation.
Substrate Node Template provides a lightweight, minimal Substrate blockchain node that you can set up as a local development environment.
Substrate Front-end template provides a front-end interface built with React using Polkadot-JS API to connect to any Substrate node. Developers are encouraged to start new Substrate projects based on these templates.
If you face any technical difficulties and need support, feel free to join the Substrate Technical matrix channel and ask your questions there.
Polkadot Wiki documents the specific behavior and mechanisms of the Polkadot network. The Polkadot network allows multiple blockchains to connect and pass messages to each other. On the wiki, you can learn about how Polkadot—built using Substrate—is customized to support inter-blockchain message passing.
Polkadot JS API doc: documents how to use the Polkadot-JS API. This JavaScript-based API allows developers to build custom front-ends for their blockchains and applications. Polkadot JS API provides a way to connect to Substrate-based blockchains to query runtime metadata and send transactions.
👉 Submit your answers to Quiz #1
Here you will set up your local machine to install the Rust compiler—ensuring that you have both stable and nightly versions installed. Both stable and nightly versions are required because currently a Substrate runtime is compiled to a native binary using the stable Rust compiler, then compiled to a WebAssembly (WASM) binary, which only the nightly Rust compiler can do.
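At the time of writing, the toolchain setup described in the Substrate installation docs looked roughly like the following sketch; always check the current installation instructions, as the required toolchains change over time:

```shell
# Install the stable toolchain, keep it current, then add nightly
# plus the WebAssembly target that the runtime build needs.
rustup default stable
rustup update
rustup update nightly
rustup target add wasm32-unknown-unknown --toolchain nightly
```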
Also refer to:
👉 Complete Lab #1: Run a Substrate node
Polkadot JS Apps is the canonical front-end to interact with any Substrate-based chain.
You can configure it to connect to whichever endpoint you want, even your locally running node. Refer to the following two diagrams.
👉 Complete Quiz #2
👉 Complete Lab #2: Using Polkadot-JS Apps
Notes: If you are connecting Apps to a custom chain (or your locally-running node), you may need to specify your chain's custom data types in JSON under Settings > Developer.
Polkadot-JS Apps only receives a series of bytes from the blockchain. It is up to the developer to tell it how to decode and interpret these custom data types. To learn more on this, refer to:
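For example, if your runtime defined a custom struct and a type alias, the JSON you paste under Settings > Developer might look like this (the type names here are hypothetical, for illustration only):

```json
{
  "Kitty": {
    "id": "Hash",
    "dna": "[u8; 16]",
    "price": "Balance"
  },
  "KittyIndex": "u32"
}
```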
You will also need to create an account. To do so, follow these steps on account generation. You'll learn that you can also use the Polkadot-JS Browser Plugin (a Metamask-like browser extension to manage your Substrate accounts) and it will automatically be imported into Polkadot-JS Apps.
Notes: When you run a Substrate chain in development mode (with the --dev flag), well-known accounts (Alice, Bob, Charlie, etc.) are always created for you.
👉 Complete Lab #3: Create an Account
You need to know some Rust programming concepts and have a good understanding of how blockchain technology works in order to make the most of developing with Substrate. The following resources will help you brush up in these areas.
You will need to familiarize yourself with Rust to understand how Substrate is built and how to make the most of its capabilities.
If you are new to Rust, or need a brush up on your Rust knowledge, please refer to The Rust Book. You could still continue learning about Substrate without knowing Rust, but we recommend you come back to this section whenever in doubt about what any of the Rust syntax you're looking at means. Here are the parts of the Rust book we recommend you familiarize yourself with:
Given that you'll be writing a blockchain runtime, you need to know what a blockchain is and how it works. The Web3 Blockchain Fundamentals MOOC YouTube video series provides a good basis for understanding key blockchain concepts and how blockchains work.
The lectures we recommend you watch are: lectures 1 - 7 and lecture 10. That's 8 lectures, or about 4 hours of video.
👉 Complete Quiz #3
To know more about the high level architecture of Substrate, please go through the Knowledge Base articles on Getting Started: Overview and Getting Started: Architecture.
In this document, we assume you will develop a Substrate runtime with FRAME (v2). This is what a Substrate node consists of.
Each node has many components that manage things like the transaction queue, communicating over a P2P network, reaching consensus on the state of the blockchain, and the chain's actual runtime logic (aka the blockchain runtime). Each aspect of the node is interesting in its own right, and the runtime is particularly interesting because it contains the business logic (aka "state transition function") that codifies the chain's functionality. The runtime contains a collection of pallets that are configured to work together.
On the node level, Substrate leverages libp2p for the p2p networking layer and puts the transaction pool, consensus mechanism, and underlying data storage (a key-value database) on the node level. These components all work "under the hood", and in this knowledge map we won't cover them in detail except for mentioning their existence.
👉 Complete Quiz #4
In our Developer Hub, we have thorough coverage of the various subjects you need to know to develop with Substrate. So here we just list the key topics and reference back to the Developer Hub. Please go through the following key concepts and the linked resources to learn the fundamentals of runtime development.
Key Concept: Runtime. This is where the blockchain state transition function (the blockchain's application-specific logic) is defined. It is about composing multiple pallets (which can be understood as Rust modules) in the runtime and hooking them up together.
Runtime Development: Execution, this article describes how a block is produced, and how transactions are selected and executed to reach the next "stage" in the blockchain.
Runtime Development: Pallets, this article describes what the basic structure of a Substrate pallet consists of.
Runtime Development: FRAME, this article gives a high level overview of the system pallets Substrate already implements to help you quickly develop as a runtime engineer. Have a quick skim so you have a basic idea of the different pallets Substrate is made of.
👉 Complete Lab #4: Adding a Pallet into a Runtime
Runtime Development: Storage, this article describes how data is stored on-chain and how you can access it.
Runtime Development: Events & Errors, this page describes how external parties learn what has happened in the blockchain, via the events and errors emitted when executing transactions.
Notes: All of the above concepts are defined in code using the #[pallet::*] macros. If you are interested in learning more about what other types of pallet macros exist, go to the FRAME macro API documentation and this doc on some frequently used Substrate macros.
👉 Complete Lab #5: Building a Proof-of-Existence dApp
👉 Complete Lab #6: Building a Substrate Kitties dApp
👉 Complete Quiz #5
Polkadot JS API is the JavaScript API for Substrate. By using it, you can build a JavaScript front end or utility and interact with any Substrate-based blockchain.
The Substrate Front-end Template is an example of using Polkadot JS API in a React front-end.
👉 Complete Lab #7: Using Polkadot-JS API
👉 Complete Quiz #6: Using Polkadot-JS API
Learn about the difference between smart contract development vs Substrate runtime development, and when to use each here.
In Substrate, you can program smart contracts using ink!.
👉 Complete Quiz #7: Using ink!
A lot 😄
On-chain runtime upgrades. We have a tutorial on On-chain (forkless) Runtime Upgrade. This tutorial introduces how to perform and schedule a runtime upgrade as an on-chain transaction.
About transaction weight and fee, and benchmarking your runtime to determine the proper transaction cost.
There are certain limits to on-chain logic. For instance, computation cannot be so intensive that it affects the block output time, and computation must be deterministic. This means that computation relying on external data fetching cannot be done on-chain. In Substrate, developers can run these types of computation off-chain and have the result sent back on-chain via extrinsics.
Tightly- and Loosely-coupled pallets, calling one pallet's functions from another pallet via trait specification.
Blockchain Consensus Mechanism, and a guide on customizing it to proof-of-work here.
Parachains: one key feature of Substrate is the capability of becoming a parachain for relay chains like Polkadot. You can develop your own application-specific logic in your chain and rely on the validator community of the relay chain to secure your network, instead of building another validator community yourself. Learn more with the following resources:
Author: substrate-developer-hub
Source Code: https://github.com/substrate-developer-hub/hackathon-knowledge-map
License:
1666004249
Running a business online? Well, I have come across many of you who bend over backwards trying to up your game by hiring a relevant Magento development company and following the best SEO practices. Fortunately, we are living in an era where you can find a plethora of methods to enhance your SEO. The following post acts as a guide that can assist you in enhancing your SEO tactics and strategies and help you up your game.
Why is there a need for SEO for eCommerce?
How is SEO beneficial for your eCommerce Store?
#1 Generate Sustainable Traffic
#2 Drives Brand Awareness
#3 Amazing Customer Experience
#4 Free
#5 Capturing the Long Tail Keywords
Magento Ecommerce SEO Best Practices
#1 Duplicate Content
#2 Keeping your store Up-to-Date
#3 Enhance the Website Speed
#4 Check the Magento SEO URLs
#5 Optimize Product Images
Final Thoughts
Do you think that SEO is a mere method of producing a high amount of traffic? Well, to be precise, SEO is way more than you think. Striving to compete in today's insanely tricky and competitive world is quite a challenge, especially after the COVID pandemic. So what exactly is SEO? It is short for Search Engine Optimization: the process of increasing the traffic visiting a website by gaining a higher position in search results than before. Originally, SEO was simply meant to attract as many visitors as possible.
Now, whenever you surf the internet to look for specific information, I am sure you come across several options in the Search Engine Result Pages (SERPs), some better than others. In other words, a higher position for a website means a greater possibility of visitors visiting and becoming your regular customers. Further, I would like to mention the pros of considering SEO for your eCommerce store.
We are residing in an era where there's a race to increase visitors, traffic and revenue, and organic search can be of great help here.
Even today, Search Engine Optimization is often overlooked, despite being a highly crucial aspect of an eCommerce business's marketing strategy. However, here below I would like to mention several benefits offered by SEO. Read away!
One of the obvious benefits of considering SEO is generating a sustainable amount of traffic. There are several methods available such as paid social media, and search engine ads that can certainly assist in driving tons and tons of visitors in an extremely short span of time. You see, maintaining such an impressive amount of traffic is tricky and all the paid methods are pretty expensive. I mean here you have to keep paying the ad provider again and again. And over time, you might have to pay more to drive volume with the increase in competition.
In fact, by incorporating the best SEO strategies, you will be able to create a more sustainable level of traffic. You see, most SEO investments tend to happen right at the beginning of the process and can offer long-lasting results. At the same time, it is extremely important to keep the content up to date and the website running smoothly. Learn more regarding content marketing.
In today's scenario, making people well aware of your brand is very important, and SEO offers a great amount of help in doing so. In fact, it turns out to be a sure-shot way to low-cost brand awareness. I mean, if your website appears on the very first page of Google, there is a strong chance that people looking for a particular product will end up stumbling on your website and even buying it.
Such rankings can also act as endorsements: visitors click through and keep the brand in mind while making the final decision. Without a doubt, the top ranking is one of the most powerful positions to be in, and unlike your competitors, you aren't falling short here.
When you end up ranking on the top of the SERPs, it becomes pretty easy for customers to visit your store. Also, this without a doubt increases the buying chances. In fact, according to several stats, almost all the users don’t even bother looking at the websites that are listed on the second page of Google. So ranking on the first page of Google turns out to be a must.
Now tell me something: do you remember URLs? Of course, not! What we generally do is we simply google things and then view websites or stores ranking on the first page of the search result. Not to mention if the website is already ranked on the first page it means that it is the best in terms of navigation, content quality, high-quality images, and whatnot! As a result, an amazing customer experience can be expected here.
#4 Free
The next advantage of considering SEO for your eCommerce store is that it is free. Unlike PPC ads, social ads, or search engine ads, you don't have to pay, whether it works wonders for you or not. All you have to do is follow some of the best SEO practices that earn you a great source of traffic without paying any kind of upfront costs. Though you may not spend extravagantly on SEO, you can hire a team of SEO experts at a reasonable price who carry an immense amount of experience ranking eCommerce stores by boosting search engine rankings.
Another crucial advantage of considering SEO for your eCommerce store is that it captures long-tail keywords. You see, around 15% of search engine queries are new and unique, and a long-tail keyword strategy means working these queries into your content seamlessly so that it covers them at a higher rate.
eCommerce sites are typically well-structured in such a way that they end up targeting those long-tail searches. For example, first you look for clothing, then women's clothing, then dresses, jumpsuits, pants, jeans, tops, t-shirts, shirts and whatnot! An eCommerce store follows the same path, so if you target keywords like "women's dresses" or "jeans for women", the end user can land directly on your site. Here all you have to do is opt for a robust and scalable SEO strategy.
I can simply go on and on when it comes to the benefits of eCommerce SEO such as building trust, increasing sales, expanding remarketing audiences, increasing site usability, fast loading speed, etc. eCommerce SEO is a growing trend and not making the most of it means you might have a lot to lose. Further, I would like to mention some of the best Magento eCommerce SEO practices to take into account.
When we use the word eCommerce, Magento is something that automatically comes to mind. Why so? Because Magento and eCommerce are like chocolate and peanut butter: incredible when merged. In fact, there are a plethora of reasons, such as power, security and customizability, why Magento is a go-to platform for retailers planning to strengthen their online presence. Now I am sure you must have come across the term Magento SEO. Well, it is a set of SEO adjustments unique to the platform. In fact, Magento turns out to be more like a blessing for SEO, as it incorporates some amazing features such as the robots.txt file, sitemap.xml, etc.
To come up with a strong technical foundation you need:
● Great URL Structure
● Meta information
● Headings
● Faceted Navigation
● Crawling and Indexing
● Site Speed
● HTTPS
In order to rank your Magento store well, especially through the organic search results, there are certain aspects you must take into consideration. Yes, I am talking about the three pillars:
1. Technology – A strong technical foundation of the website can certainly assist search engines in finding and understanding your site as soon and as effectively as possible.
2. Relevance – Is your content relevant to the search query? For that, you have to make sure that you end up creating content that’s useful and satisfying for the end users.
3. Authority – Try to build trust by earning as many quality links to your website as you can.
It may interest you to know that doing SEO for a typical website is far easier than for an eCommerce store featuring hundreds or thousands of page listings or product listings. Fret not; here below I would like to mention some of the most important SEO tips that must be taken into account for the betterment of your eCommerce store.
What exactly is duplicate content? I am sure you must have seen similar content over the internet: two websites having the same headings, titles, paragraphs, images; basically everything seems extremely identical. Can you spot which one is original and which one is copied? What happens is that when multiple versions are similar to each other, it becomes very difficult for search engines to distinguish between them. And that's the reason why search engines rarely display duplicate pages in the search engine rankings. You must be wondering whether duplication can harm SEO. Well, of course it does, and in many ways!
One of the obvious ways is that duplicate content can result in penalties, harming your page rankings and organic traffic. You see, in such cases search engines determine which version is more relevant to the audience's query and rank that one. There is also no denying the fact that duplicate content can severely dilute link equity and credibility.
I have come across many of you who have this question: is this visible to the naked eye? Well, not really, because duplication is often hidden in the code of the site, so you need software to check things precisely. Best practices to combat duplicate content are:
● 301 redirect
● Make use of Meta Robot Tags
● Make relevant changes in Meta Title Tags
● Use Canonical tag
● Eliminate pages
Mainly duplicate content issues occur in pages such as product filtering, product sorting, pagination, the same product in different categories, variation of a similar product and so forth.
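For example, one common fix for filtered or sorted variants of a category page is a canonical tag pointing search engines at the primary version (the URLs below are placeholders):

```html
<!-- Placed in the <head> of a filtered variant such as
     /women/dresses?sort=price&color=red -->
<link rel="canonical" href="https://example.com/women/dresses" />
```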
Another crucial tip to take into account is keeping your Magento store up to date. If you have developed an impressive Magento store but don't keep it up to date, you won't get the desired results or keep your customers hooked for a long period of time. Of course, Magento development is very crucial, but what's more crucial is maintaining the website. Here's how!
● Analyze website performance – You have to keep examining the overall performance of the website day in and day out. Fortunately, there is a wide range of Magento tools available that can assist you in analyzing the website, so that you can keep an eye on clicks, visits, bounce rate, search queries and whatnot!
● Website speed – With users' increasingly short attention spans, website owners must keep close track of site speed. If the website doesn't load within three seconds, your customers are likely to switch to your competitors.
● Regular updates – Everyone favors fresh, relevant information, and your end users are no exception. So refresh your content at regular intervals to keep your users engaged and your site on top of Google.
Here you have a choice: update things manually through a system upgrade, or seek assistance from a Magento development company that offers seamless maintenance.
The next step to consider is improving website speed. Slow loading can be quite discouraging and tiresome. First, let's understand why it happens in the first place. Common causes of slow loading include not meeting the system requirements, using inappropriate extensions, unoptimized MySQL, NGINX, and PHP configurations, disabled caching, slow hardware, and more.
Further, I would like to mention a few ways through which you can speed up your Magento store.
● Update regularly – When a new version is released, know that it matters, and the same goes for every other technology you use. Developers in the Magento community strive hard to make the latest version secure, robust and scalable. So don't miss out on a big update; you never know how useful it may be.
● Optimize your database – Databases store data in one location. When that data is poorly optimized, it takes far longer to serve requests, and this surely shows up in performance.
● Enable Magento Cache Management – Think of sending an invitation to around 1,000 people: doing it manually takes a huge amount of time and effort, whereas sending one bulk email does not. Caching works the same way:
○ A copy of your site is created in the local cache
○ Magento returns the copy instead of rebuilding the site on every request
Also, make site audits a regular habit. From checking that the right pages are indexed to getting notified when the site slows down, use a website audit tool to identify issues such as broken links, SSL errors, lack of mobile optimization and more.
Another interesting tip is to keep tabs on your Magento URLs and check whether they are SEO-friendly. One of the most useful features offered by Magento is that it lets end users edit their product URLs freely. In other words, it is extremely easy to change any link, especially those for product categories and CMS pages, so you no longer have to worry about 404 errors or missing content.
Some of the best and most common examples to consider:
● website.com/category/
● website.com/category/sub-category/
● website.com/category-sub-category/product-name/
Another crucial step is to optimize product images; doing so can improve your search rankings. You may be wondering what image optimization is. It is all about reducing the file size of your images without sacrificing quality. Here's how you can do it!
● Name images descriptively – You will be listing hundreds or thousands of products, so don't keep the default file names from the camera. Incorporate relevant keywords so that when a crawler goes through your file names, it finds them relevant. Take a close look at your website analytics and keyword patterns before making these decisions.
● Choose image dimensions – Since you are creating an eCommerce store, you will of course show the product from different angles: the back, the front, the interior, and so on. In addition, don't forget to add a relevant description so that potential users end up landing on your website.
● Reduce the file size – As mentioned earlier, attention spans are short, and if your eCommerce store doesn't load within 3 seconds you are losing potential customers. It may also interest you to know that Google uses page load time as one of its ranking factors.
And that’s all for now!
Though SEO is one of the most conventional approaches, there are numerous quicker options, such as paid ads, social media ads and email, that can offer immediate returns. Investing in or hiring a reliable SEO agency, by contrast, may not provide immediate returns, but it is a slow yet sustainable path to long-term growth. Keep aspects such as keyword selection, content creation and technical SEO in mind, and invest in opportunities within your reach.
Whether you are a techie or a non-techie, be realistic: an SEO strategy won't be an overnight success. It may take weeks, months, even a year, but have faith that one day your eCommerce store will be found at the top of the search rankings. Stay motivated and keep going, because that's the only way to move forward.
I hope you found this post meaningful. If so, feel free to share it among your peers and help us spread the word.
Original Source: [HERE]
In this article, let's learn about Hangfire in ASP.NET Core 3.1 and how to integrate it with your Core applications. A common programming task we face regularly is running background jobs. Running these jobs correctly without messing up your code is not easy, but it is not hard either. I used to work with Windows Services to schedule various tasks within my C# applications. Then I came across this almost incredible library, Hangfire, and never looked back.
Essentially, background jobs are methods or functions that may take a long (unknown) amount of time to run. If these jobs execute on the main thread of our application, they may block user interaction and make it look as though our .NET Core application has hung and stopped responding. That is quite critical for customer-facing applications. Hence we have background jobs: much like multithreading, these jobs run on a separate thread, making our application feel nicely asynchronous.
We should also be able to schedule them for the near future so that everything is fully automated. A developer's life would be pretty hard without these capabilities.
Hangfire is an open-source library that lets developers schedule background jobs with great ease. It is a highly flexible library offering the various features needed to make job scheduling a piece of cake. Hangfire in ASP.NET Core is a library you can't afford to miss.
For this tutorial, let's work with a specific scenario so we can explain Hangfire to its full potential. Say we are building an API responsible for sending mail to users in different scenarios. It makes the most sense to explain Hangfire this way. Hangfire is one of the easiest libraries to adopt, yet very powerful. It is one of those packages that really helps you build applications in an asynchronous, decoupled way.
As mentioned earlier, Hangfire uses a database to store job data. We will use a MSSQL Server database in this demonstration. Hangfire automatically creates the required tables during the first run.
We'll start by creating a new ASP.NET Core project with the API template selected. Now create an empty API controller; let's call it HangfireController. I am using Visual Studio 2019 Community as my IDE and Postman to test the APIs.
Install the only package you will need to set up Hangfire:
Install-Package Hangfire
Once you have installed the package, we are ready to configure it to work with our ASP.NET Core API application. This is a fairly straightforward step; moreover, once you install the package you are shown a quick Readme that walks you through completing the configuration.
Navigate to Startup.cs / ConfigureServices so that it looks like the below code snippet.
public void ConfigureServices(IServiceCollection services)
{
services.AddHangfire(x => x.UseSqlServerStorage("<connection string>"));
services.AddHangfireServer();
services.AddControllers();
}
Explanation.
Line #3 adds the Hangfire service to our application. We have also specified the storage to be used, MSSQL Server, along with the connection string/name.
Line #4 actually fires up the Hangfire server, which is responsible for processing jobs.
Once that is done, let's go to the Configure method and add the following line.
app.UseHangfireDashboard("/mydashboard");
Explanation.
This line lets us access the Hangfire dashboard in our ASP.NET Core application. The dashboard will be available at the /mydashboard URL. Let's start the application.
When you start your ASP.NET Core application, Hangfire checks whether an associated Hangfire schema is available in your database. If not, it creates a bunch of tables for you. This is what your database will look like.
After the application loads, navigate to <localhost>/mydashboard. You will be able to see the Hangfire dashboard.
From the dashboard you can monitor jobs and their statuses. It also lets you manually trigger the available jobs. This is THE feature that sets Hangfire apart from other schedulers: a built-in dashboard. How cool is that? The screenshot above shows the dashboard overview. Let's explore the other tabs as well.
All the jobs available in the data store (our MSSQL server) are listed here. You get a complete picture of the state of each job (Enqueued, Succeeded, Processing, Failed, and so on) on this screen.
Jobs tend to fail from time to time due to external factors. In our case, the API tries to send mail to the user, but an internal connection issue keeps the job from running. When a job fails, Hangfire keeps retrying it until it passes (this is configurable).
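As a rough sketch of how that retry behavior can be tuned, Hangfire ships an AutomaticRetry attribute you can place on a job method (by default it retries 10 times). The mail-sending body here is just a placeholder:

```csharp
using Hangfire;

public class MailJobs
{
    // Retry up to 5 times; after that, keep the job in the Failed tab
    // (instead of deleting it) so it can be requeued manually from the dashboard.
    [AutomaticRetry(Attempts = 5, OnAttemptsExceeded = AttemptsExceededAction.Fail)]
    public void SendWelcomeMail(string userName)
    {
        // Mailing logic that may throw on transient network errors
    }
}
```

Any exception thrown by the method counts as a failed attempt, so transient outages (like the connection issue above) are retried automatically.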
What if you need to mail your invoice usage every month? This is Hangfire's flagship feature: recurring jobs. This tab lets you monitor all the configured jobs.
Remember, while configuring Hangfire in the Startup.cs class we added services.AddHangfireServer(). This tab shows all the active Hangfire servers. These servers are responsible for processing the jobs. Say you had not added services.AddHangfireServer() in the Startup class: you could still add Hangfire jobs to the database, but they would not execute until you start a Hangfire server.
This one is a fairly obvious feature. Since the dashboard can expose very sensitive data such as method names, parameter values and email IDs, it is very important that we secure/restrict this endpoint. Out of the box, Hangfire keeps the dashboard secure by allowing only local requests. However, you can change this by implementing your own version of IDashboardAuthorizationFilter. If you have already implemented authorization in your API, you can implement it for Hangfire too. Refer to these steps to secure the dashboard.
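A minimal sketch of such a filter, assuming ASP.NET Core authentication is already configured in your app (adjust the check to your own scheme, e.g. add a role test):

```csharp
using Hangfire.Dashboard;

public class DashboardAuthFilter : IDashboardAuthorizationFilter
{
    public bool Authorize(DashboardContext context)
    {
        var httpContext = context.GetHttpContext();
        // Allow only authenticated users to open the dashboard
        return httpContext.User.Identity?.IsAuthenticated ?? false;
    }
}

// Then wire it up in Configure instead of the bare call:
// app.UseHangfireDashboard("/mydashboard", new DashboardOptions
// {
//     Authorization = new[] { new DashboardAuthFilter() }
// });
```

Returning false makes the dashboard respond with 401, so anonymous visitors never see job names or parameters.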
Background jobs in ASP.NET Core (or really any technology) come in many types depending on the requirements. Let's go through the job types available with Hangfire, with a proper implementation and explanation in our ASP.NET Core API project. Let's code.
Fire-and-forget jobs execute only once, almost immediately after creation. We'll create our first background job. Open the Hangfire controller we created earlier. We'll build a POST endpoint that welcomes a user with an email (ideally). Add this code.
[HttpPost]
[Route("welcome")]
public IActionResult Welcome(string userName)
{
var jobId = BackgroundJob.Enqueue(() => SendWelcomeMail(userName));
return Ok($"Job Id {jobId} Completed. Welcome Mail Sent!");
}
public void SendWelcomeMail(string userName)
{
//Logic to Mail the user
Console.WriteLine($"Welcome to our application, {userName}");
}
Explanation.
Line #5 stores the job ID in a variable. You can see that we are actually enqueuing a background job represented by a dummy function, SendWelcomeMail. The job ID is shown later in the Hangfire dashboard. Build the application and run it. Let's test it with Postman.
Note the URL and how I am passing the username to the controller. Once you execute it, you get the expected response: “Job Id 2 Completed. Welcome Mail Sent!”. Now let's look at the Hangfire dashboard.
In the Succeeded tab, you can see the count of completed jobs. You can also view the details of each job, similar to the screenshot above. All the parameters and function names are exposed here. Want to re-run this job with the same parameters? Hit the Requeue button. It adds your job back to the queue for Hangfire to process, which happens almost immediately.
Now, what if we want to send mail to a user not immediately, but after 10 minutes? In such cases we use delayed jobs. Let's look at the implementation, after which I will explain it in detail. In the same controller, add these lines of code. It is quite similar to the previous variant, but we introduce a delay factor.
[HttpPost]
[Route("delayedWelcome")]
public IActionResult DelayedWelcome(string userName)
{
var jobId = BackgroundJob.Schedule(() => SendDelayedWelcomeMail(userName),TimeSpan.FromMinutes(2));
return Ok($"Job Id {jobId} Completed. Delayed Welcome Mail Sent!");
}
public void SendDelayedWelcomeMail(string userName)
{
//Logic to Mail the user
Console.WriteLine($"Welcome to our application, {userName}");
}
Explanation.
Line #5 schedules the job after a defined time span, in our case 2 minutes. That means our job will execute 2 minutes after Postman calls the action. Let's open Postman again and test it.
You can see we got the expected response from Postman. Now quickly go back to the Hangfire dashboard and click the Jobs/Scheduled tab. It should say the job will execute in a minute or so. There you go: you have created your first scheduled job with Hangfire, with ease.
Our customer has a subscription to our service. Obviously, we would have to send them a reminder about the payment, or the invoice itself. This calls for a recurring job, where I can email my customers monthly. Hangfire supports this through CRON schedules.
What is CRON? CRON is a time-based utility that can define time intervals. Let's see how to achieve such a requirement.
[HttpPost]
[Route("invoice")]
public IActionResult Invoice(string userName)
{
RecurringJob.AddOrUpdate(() => SendInvoiceMail(userName), Cron.Monthly);
return Ok($"Recurring Job Scheduled. Invoice will be mailed Monthly for {userName}!");
}
public void SendInvoiceMail(string userName)
{
//Logic to Mail the user
Console.WriteLine($"Here is your invoice, {userName}");
}
Line #5 clearly states that we are adding/updating a recurring job that calls a function as many times as the CRON schedule defines. Here we will mail the invoice to the customer monthly, on the first day of every month. Let's run the application and switch to Postman. I am running this code on 24 May 2020; per our requirement, this job should fire on 1 June 2020, which is 7 days away. Let's see.
So, that worked. Let's switch to the Hangfire dashboard and go to the Recurring Jobs tab.
Perfect! It works as expected. You can browse the various CRON schedules here to match your requirements. Here is a nice little piece of documentation for understanding how various CRON expressions are used.
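To sketch a few schedule variants (reusing the SendInvoiceMail method from the snippet above; the user name is just a placeholder), Hangfire accepts both its Cron helper methods and raw CRON expression strings:

```csharp
using Hangfire;

// First day of every month at midnight (same as the example above)
RecurringJob.AddOrUpdate(() => SendInvoiceMail("Sam"), Cron.Monthly);

// Every day at midnight
RecurringJob.AddOrUpdate(() => SendInvoiceMail("Sam"), Cron.Daily);

// Raw CRON expression: every Monday at 09:00
RecurringJob.AddOrUpdate(() => SendInvoiceMail("Sam"), "0 9 * * MON");
```

Calling AddOrUpdate again with the same job updates its schedule in place rather than creating a duplicate.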
This is a more involved scenario. Let me try to keep it very simple. A user decides to unsubscribe from your service. After they confirm the action (maybe by clicking an unsubscribe button), we (the application) have to unsubscribe them from the system and then send a confirmation mail as well. So the first job actually unsubscribes the user, and the second job sends a mail confirming the action. The second job should execute only after the first job has completed successfully. Got the scenario?
[HttpPost]
[Route("unsubscribe")]
public IActionResult Unsubscribe(string userName)
{
var jobId = BackgroundJob.Enqueue(() => UnsubscribeUser(userName));
BackgroundJob.ContinueJobWith(jobId, () => Console.WriteLine($"Sent Confirmation Mail to {userName}"));
return Ok($"Unsubscribed");
}
public void UnsubscribeUser(string userName)
{
//Logic to Unsubscribe the user
Console.WriteLine($"Unsubscribed {userName}");
}
Explanation.
Line #5 is the first job, which actually contains the logic to remove the user's subscription.
Line #6 is our second job, which continues after the first job executes. This is done by passing the job ID of the parent job to the child job.
Let's start the application and go to Postman.
Now go to the dashboard and check the succeeded jobs. You will see 2 new jobs executed in the exact order we wanted. That's it for this tutorial. I hope these concepts are clear and that you find it easy to integrate Hangfire into your ASP.NET Core applications.
In this detailed guide we covered the concept of background jobs, the features and implementation of Hangfire in ASP.NET Core applications, and the various job types in Hangfire. The source code used to demonstrate this tutorial is published on GitHub; the link is below for your reference. Do you have experience with Hangfire? Any questions or suggestions? Feel free to leave them in the comments section below. Happy coding!
Source: https://codewithmukesh.com/blog/hangfire-in-aspnet-core-3-1/