What is tech debt? | TestFort Blog



Probably anyone connected to software development has heard about technical debt. But salespeople, account managers, and probably even CEOs rarely understand what tech debt means, how to measure it, or why it has to be paid. According to a Claranet survey, 48% of 100 IT decision-makers from the UK say that their non-technical coworkers have no idea about the financial impact of tech debt.

At the same time, the risks of not resolving technical debt must be clear not only to developers but also to other members of the team. This helps the whole team acknowledge the need for changes and understand why not every feature or project can be implemented quickly. But the question remains — how do you explain tech debt to a non-technical audience?


Technical debt or code debt is a metaphor that refers to all consequences that arise due to poorly written code and compromises in the development. This concept, in particular, includes the extra effort that has to be done to improve the software or to add additional functionality.

Probably the best way to explain the term “technical debt” is with an analogy. “I very much like the definition of technical debt provided by Wikipedia: technical debt can be compared to monetary debt. What does it mean? If technical debt is not repaid, it accumulates ‘interest’, making it harder to implement changes later on. It’s the best definition, coined by a tech specialist (American programmer Ward Cunningham) to justify refactoring costs in front of the CFO of a company,” says Alex Gostev, Project Portfolio Manager at QArea.

It is quite easy to explain the concept of tech debt to business owners: it is like taking out a loan. For example, you can take out a loan to open a new office for the company. But if you borrow the money to carry out this plan, interest accrues. In other words, you will pay more in the long run.

Metaphoric parallels with real life can also be a useful tool for explaining tech debt to non-tech specialists. We can talk about technical debt as a mess in the house, for example in the kitchen. Imagine that you have cooked your dinner and decide to leave the kitchen without cleaning up. You continue to do this every day, and at some point the kitchen (read “software”) becomes so chaotic that it is impossible to cook (read “add new features”) there anymore.

Sometimes it is better to explain tech debt with hard numbers rather than expect to persuade people with metaphors. “I’d advise properly setting up an experiment within the company linking refactoring work with positive dynamics in metrics that can be translated into numbers — fewer hours of work for maintaining the code base, fewer recurring bugs, a higher number of customer complaints resolved,” Gostev stresses.

It is also important to mention the different causes of technical debt, which can influence its scope or, in some cases, justify taking it on.


We can distinguish at least three types of technical debt. Even if developers don’t compromise on quality and try to build future-proof code, debt can arise involuntarily, provoked by constant changes in the requirements or the evolution of the system. Your design turns out to be flawed and you can’t add new features quickly and easily, but it wasn’t your fault or decision. In this case, we’re talking about accidental or unavoidable tech debt.

The second type of technical debt is deliberate debt, which appears as a result of a well-considered decision. Even if the team understands that there is a right way to write the code and a fast way to write the code, it may go with the second one. And very often this makes sense, as with startups aiming to deliver their products to market quickly in order to outpace their competitors.

And finally, the third type of tech debt refers to situations when developers didn’t have enough skill or experience to follow best practices, which leads to genuinely bad code. Bad code can also appear when developers didn’t take enough time and effort to understand the system they are working with, missed things, or, vice versa, performed too many changes.


Technical debt is a problem that needs to be addressed, no doubt about that. You can start with a significant level of tech debt for various reasons but reduce it as you go along by refactoring, so the damage is minimized or eliminated. But if you let the technical debt accumulate, there will be real consequences. For example, tech debt can lead to cash costs, such as the need to recruit more people to maintain the system or the extra time required to build new features. According to Appian, technical debt consumes 40% of development time and results in higher operational expenses. Tech debt may also lead to fines following security breaches or lost sales following system outages.

Non-cash costs of tech debt can be pretty harmful to your business too. Tech debt can lead to an inability to complete user-experience improvements or adapt to changes in the market. And it can result in lower productivity because of system outages or the need to spend a significant amount of time extracting data rather than analyzing it.

Even the most well-known companies have had to tackle tech debt at some point. According to Forbes, Twitter’s decision to build its platform’s front end on Ruby on Rails eventually made it difficult to optimize search performance and add new features. Twitter responded properly and switched to a Java server, which allowed the company to decrease search latency significantly. In this way, Twitter paid off its tech debt.

It is crucial to be mindful of the risks associated with tech debt, but you should also keep in mind that not all debt necessarily becomes a big problem. “Causes or roots of tech debt should be understood within the team to find out what techniques or approaches produce an inexcusable result. That will be the bad technical debt to fight against. Everything else – we can call it good/soft/tolerable technical debt – can be eliminated out of the equation,” notes Dmitriy Barbashov, Chief Technology Officer at QArea.

How to Create a simple Blog using Mongodb, Node-js and Express




In this tutorial, we will create a simple blog whose visitors can read the available blog posts, using MongoDB, Node.js, and Express. This lets us explore an operation common to almost every blog application: retrieving article content from a database.

Now that you know a bit more about dynamic web applications and what you’re going to learn, it’s time to start creating the blogging application project that will contain our example.

This article shows how you can create a dynamic blog using the Express Application Generator tool, which you can then populate with website-specific routes, views/templates, and database calls using a Mongo cloud database on mLab. We’ll use the tool to create the framework for our sample blog application, to which we’ll add, step by step, all the other code the blog needs. Creating the site is extremely simple: you only need to invoke the generator on the command line with a new project name, optionally also specifying the site’s template engine and CSS generator.

First, install the generator tool system-wide using the Node package manager (npm):

=> npm install express-generator -g
Which view engine should I use?

The Express Application Generator lets you configure several popular view/templating engines, including EJS, Hbs, Pug (Jade), Twig, and Vash, although it chooses Jade by default if you don’t specify a view option. Express itself can likewise support an extensive number of other templating languages.

a] Time to productivity:- If you already have experience with a templating language, you will likely be productive sooner using that language. If not, you should consider the relative learning curve of the candidate templating engines.

b] Popularity and activity:- Review the popularity of the engine and whether it has an active community. It is important to be able to get support when you have issues over the lifetime of the site.

c] Style:- Some template engines use special markup to embed content inside “ordinary” HTML, while others construct the HTML using a different syntax (for instance, using indentation and block names). Also consider execution/rendering time.

Creating the project

For the sample blog application we’re going to build, we’ll create a project named sampleblog using the Pug (formerly “Jade”) template library and no CSS stylesheet engine.

First, open a command prompt and run the Express Application Generator as shown:

=> express sampleblog --view=pug
Install NPM dependencies:
=> cd sampleblog && npm install

Run the app:

=>  SET DEBUG=sampleblog:* & npm start
Running the generated Express website

At this point, we have a complete application skeleton. The site doesn’t actually do very much yet, but it’s worth running it to show how it works.

First, we install the npm dependencies. The following command will fetch all the dependency packages listed in the application’s package.json file.

=> cd sampleblog

=> npm install

Then run the application:

=> SET DEBUG=sampleblog:* & npm start

Then load http://localhost:3000 in your browser to see the generated application run.


Let’s now take a look at the directory structure we just created.

Now that you have installed all dependencies, the generated app has the following file structure. The package.json file defines the application dependencies and other starter information. It also defines a startup script that calls the application entry point, the JavaScript file /bin/www. This sets up some of the application error handling and then loads app.js. The application routes are stored in separate modules under the routes directory. The templates are stored under the /views directory.
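For reference, the skeleton produced by the generator (with the Pug view option) typically looks like this; treat the exact file list as indicative, since it varies slightly between generator versions:

```text
sampleblog/
├── app.js
├── package.json
├── bin/
│   └── www
├── public/
│   ├── images/
│   ├── javascripts/
│   └── stylesheets/
│       └── style.css
├── routes/
│   ├── index.js
│   └── users.js
└── views/
    ├── error.pug
    ├── index.pug
    └── layout.pug
```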

What is package.json?

The package.json file defines all application dependencies and other metadata. In addition to the express package itself, it lists the following packages, which are useful in many web applications:

{
  "name": "sampleblog",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "start": "node ./bin/www",
    "devstart": "nodemon ./bin/www"
  },
  "dependencies": {
    "body-parser": "~1.18.2",
    "cookie-parser": "~1.4.3",
    "express": "~4.16.2",
    "debug": "~2.6.9",
    "morgan": "~1.9.0",
    "pug": "~2.0.0-rc.4",
    "serve-favicon": "~2.4.5"
  },
  "devDependencies": {
    "nodemon": "^1.14.11"
  }
}

1] body-parser: parses the body of an incoming HTTP request and makes it easier to read POST parameters.

2] cookie-parser: parses the cookie header and populates req.cookies with cookie information.

3] debug: a small Node debugging utility modelled after Node core’s debugging technique.

4] morgan: an HTTP request logger middleware for Node apps.

The scripts section defines a “start” script to bootstrap the application, which is what we invoke when we run npm start. From the script definition, you can see that this actually starts the JavaScript file ./bin/www with node. It also defines a “devstart” script for running the application under nodemon, which restarts it when files change.

What is the www file?

In this application, the /bin/www file is the entry point. The first thing it does is require() the “real” application entry point (app.js), importing the express() application object that app.js exports.

/**
 * Module dependencies.
 */

var app = require('../app');

Here require() is a global Node function that is used to import modules into the current file.
What is app.js?

This file creates an Express application object, sets up the application with various settings and middleware, and then exports the app from the module. The following code shows just the parts of the file that create and export the app object, using the module.exports syntax, so that it can be imported by other modules such as /bin/www.

var express = require('express');
var app = express();
module.exports = app;
Setup for view engine

First, we create the app object using our imported express module, and then use it to set up the view engine. There are two parts to setting up the engine. First, we set the “views” value to specify the folder where the templates will be stored. Then we set the “view engine” value to specify the template library. Express supports many view engines; in our application we use Pug.

var express = require('express');
var path = require('path');

var app = express();

// view engine setup
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'pug');
Setup for static files

Next, we use the express.static middleware to get Express to serve all the static files in the /public directory in the project root. For example, a file at public/images/logo.png would be served at the URL /images/logo.png.

// other dependencies 
app.use(express.static(path.join(__dirname, 'public')));
Setup route handling chain

Then we add our route-handling code to the request handling chain. The imported modules define the handlers for specific route paths.

var indexRouter = require('./routes/index');
var usersRouter = require('./routes/users');

app.use('/', indexRouter);
app.use('/users', usersRouter);
How do routes work?

The route file /routes/dashboard.js is shown below. First, it loads the express module and uses it to create an express.Router object. Then it defines routes on that object and, finally, exports the router from the module.

var express = require('express');
var router = express.Router();

/* GET users listing. */
router.get('/', function(req, res, next) {
  res.send('respond data');
});

module.exports = router;

The route defines a callback that will be called whenever an HTTP GET request with the correct pattern is detected. The matching pattern is the path specified where the module is mounted (‘/dashboard’) combined with whatever is defined in this file (‘/’).
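As a toy illustration (this is not how Express is implemented, just the matching rule it applies), the full URL a handler responds to is the mount path passed to app.use() joined with the path given to router.get():

```javascript
// Toy helper: join a mount path and a route path the way Express
// matches them, collapsing the trailing slash ('/dashboard' + '/'
// matches GET /dashboard).
function fullPath(mountPath, routePath) {
  var joined = (mountPath + routePath).replace(/\/+$/, '');
  return joined === '' ? '/' : joined;
}
```

So app.use('/dashboard', router) plus router.get('/') answers GET /dashboard, while router.get('/stats') on the same router would answer GET /dashboard/stats.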

How do routes work with views?

The view templates are stored in the /views folder and are given the file extension .pug. The method res.render() renders a specified template along with the values of named variables passed in an object, and then sends the result as the response. In the code below from /routes/dashboard.js you can see how that route renders a response using the template “dashboard”, passing the template variable “UserName”.

/* GET page. */
router.get('/', function(req, res) {
  res.render('dashboard', { UserName: 'Express' });
});

The corresponding template for the above route, dashboard.pug, is given below:

extends layout

block content
  h1= UserName
  p Welcome to #{UserName}
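One thing this walkthrough has not yet wired up is the MongoDB part promised in the title. In the finished app, a route would query a Mongoose model (something along the lines of Post.find().sort({ createdAt: -1 })) and pass the results to res.render(). As a sketch of that data flow, with an in-memory array standing in for the MongoDB collection and a plain function standing in for the query (the field names here are illustrative, not from the tutorial):

```javascript
// An in-memory array stands in for a MongoDB 'posts' collection in this
// sketch; in the real app this data would come from a Mongoose model.
var posts = [
  { title: 'Hello world', body: 'First post', createdAt: new Date('2018-01-02') },
  { title: 'Second post', body: 'More text', createdAt: new Date('2018-01-05') }
];

// Equivalent in spirit to Post.find().sort({ createdAt: -1 }) in Mongoose:
// return the posts ordered newest first, without mutating the source array.
function findPostsNewestFirst(collection) {
  return collection.slice().sort(function (a, b) {
    return b.createdAt - a.createdAt;
  });
}

// A route handler would then hand the result to the template, e.g.:
// router.get('/', function (req, res) {
//   res.render('index', { title: 'Simple Blog', posts: findPostsNewestFirst(posts) });
// });
```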

Thanks for reading! I hope this gets you started building apps with Node and Express.

Example Modern Vuepress Blog Theme



Modern blog theme for VuePress.


  • Sitemap generator support
  • Comments support
  • Auto meta tags
  • Better SEO experience
  • Social Sharing
  • Google Analytics
  • Smooth Scrolling
  • Reading Time
  • Reading Progress
  • PWA Support


yarn add vuepress-theme-modern-blog -D
# OR npm install vuepress-theme-modern-blog -D


// .vuepress/config.js
module.exports = {
  theme: 'modern-blog',
  themeConfig: {
    // Please keep looking down to see the available options.
  },
}


  • Type: Array<{ text: string, link: string }>
  • Default: undefined


module.exports = {
  themeConfig: {
    nav: [
      {
        text: 'Home',
        link: '/',
      },
      {
        text: 'Archive',
        link: '/archive/',
      },
      {
        text: 'Tags',
        link: '/tag/',
      },
    ],
  },
}



  • Type: Array<{ type: ContactType, link: string }>
  • Default: undefined

Contact information, displayed on the left side of footer.

module.exports = {
  themeConfig: {
    footer: {
      contact: [
        {
          type: 'github',
          link: 'https://github.com/vuejs/vuepress',
        },
        {
          type: 'twitter',
          link: 'https://github.com/vuejs/vuepress',
        },
      ],
    },
  },
}

For now, ContactType supports the following enums:

  • github
  • facebook
  • twitter
  • instagram
  • linkedin

::: tip Contributions adding more built-in contact types are welcome. :::


Copyright information, displayed on the right side of footer.

module.exports = {
  themeConfig: {
    footer: {
      copyright: [
        {
          text: 'Privacy Policy',
          link: 'https://policies.google.com/privacy?hl=en-US',
        },
        {
          text: 'MIT Licensed | Copyright © 2018-present Vue.js',
          link: '',
        },
      ],
    },
  },
}


A function used to modify the default blog plugin options of @vuepress/plugin-blog. It is commonly used to apply custom document classifiers, e.g.

module.exports = {
  themeConfig: {
    modifyBlogPluginOptions(blogPluginOptions) {
      const writingDirectoryClassifier = {
        id: 'writing',
        dirname: '_writings',
        path: '/writings/',
        layout: 'IndexWriting',
        itemLayout: 'Writing',
        itemPermalink: '/writings/:year/:month/:day/:slug',
        pagination: {
          perPagePosts: 5,
        },
      }
      // add the new classifier to the default directories
      blogPluginOptions.directories.push(writingDirectoryClassifier)
      return blogPluginOptions
    },
  },
}

Here is the default blog plugin options:

{
  directories: [
    {
      id: 'post',
      dirname: '_posts',
      path: '/',
      layout: 'IndexPost',
      itemLayout: 'Post',
      itemPermalink: '/:year/:month/:day/:slug',
      pagination: {
        perPagePosts: 5,
      },
    },
    {
      id: 'archive',
      dirname: '_archive',
      path: '/archive/',
      layout: 'IndexArchive',
      itemLayout: 'Post',
      itemPermalink: '/archive/:year/:month/:day/:slug',
      pagination: {
        perPagePosts: 5,
      },
    },
  ],
  frontmatters: [
    {
      id: "tag",
      keys: ['tag', 'tags'],
      path: '/tag/',
      layout: 'Tags',
      frontmatter: { title: 'Tags' },
      itemlayout: 'Tag',
      pagination: {
        perPagePosts: 5,
      },
    },
  ],
}


  • Type: boolean
  • Default: true

Whether to extract summary from source markdowns.


  • Type: number
  • Default: 200

Set the length of summary.


  • Type: boolean
  • Default: false

Whether to enable PWA support. This option is powered by the official PWA plugin.

If you enable this option, the default options of the internal PWA plugin are as follows:

  serviceWorker: true,
  updatePopup: true


To make this work, you need to create a new page, add the proper config to themeConfig.nav, and then set the layout to AboutLayout in the page frontmatter.

  • Type: { fullName: string, bio: string, image: string }
  • Default: undefined


  • Type: string
  • Default: "https://source.unsplash.com/random"


  • Type: string
  • Default: undefined

Disqus website shortname; check the official website.


  • Type: boolean
  • Default: false

To enable this plugin you need to define:

  sitemap: true,
  hostname: "https://ahmadmostafa.com/" // your own hostname


  • Type: boolean
  • Default: false

To enable this plugin you also need to define:


  • Type: Array< string >
  • Default: undefined

refer to docs


  socialShare: true,
  socialShareNetworks: ["twitter", "facebook"],


Google analytics tracking ID

  • Type: string
  • Default: undefined


  • Type: string
  • Default: Pagination

Custom the pagination component.

The default is the pagination component powered by @vuepress/plugin-blog:

You can set this option to SimplePagination to enable another out-of-box simple pagination component:

You can also write a custom pagination component and register it as a global component, then pass its name to this option.

Front Matter


  • Type: string|string[]
  • Default: 200


tags:
  - JavaScript
  - DOM


Date published

date: 2016-10-20 20:44:40


Author name

author: Ahmad Mostafa


location: Jordan


Post summary

description: some description


Title that will be shown in the posts list

title: Front Matter in VuePress


Header image for the post item

image: https://source.unsplash.com/random

The biggest software failures in recent years | TestFort Blog



Everyone who uses modern technologies has encountered errors and software failures. While in most cases programmers’ mistakes are not too serious, some IT failures can have truly horrific consequences. The other aspect is the price the breached organizations pay. According to RiskIQ’s report, security breaches alone cost major companies as much as $25 per minute, while crypto companies may lose almost $2,000 a minute to cybercrime. We have collected some of the most memorable examples of software failures from recent years (with many well-known brands involved) to show how severe the results can be and why preventive measures (such as extensive software testing) are truly required.

Two years ago the well-known code collaboration platform GitLab experienced a severe data loss that turned out to be one of the major outages in the IT world. GitLab originally used only one database server but decided to test a solution using two servers. The plan was to copy the data from the production environment to the test environment.

In the process, automatic mechanisms began to remove accounts from the database that were identified as dangerous. As a result of increased traffic, the data copying process began to slow down and then stopped completely due to data discrepancies. To add insult to injury, information from the production database was removed during the copying process.

After several attempts to resume the process, one of the employees decided to delete the test base and start the process again but accidentally deleted the production base. What made things even worse is that the directory holding the copies was empty too — the backups had not been made for a long time due to a configuration error.

What was meant to be a standard procedure resulted in an 18-hour outage, and 300 GB of customer data was lost. According to GitLab’s estimates, the company lost data on at least 5,000 new projects, 5,000 comments, and 700 users. The company’s approach to this failure deserves respect: GitLab explained in detail what happened, broadcast the restoration procedure on YouTube, and published a list of improvements to ensure the trouble would never happen again. But, as they say, the damage was done.

This summer, the flag carrier airline of the UK — British Airways — reported an IT system issue that delayed hundreds of flights in the UK, while dozens of flights were canceled completely. The failure affected three British airports and thousands of passengers, who had to rebook their flights or check in using manual systems. Even after the problem was solved, the airports felt the effect of this failure for a long while before normal service resumed.

This computer problem at British Airways is just the latest in a series of the airline’s IT troubles. Last year British Airways received a record fine of 200 million euros for a data breach: a cyber-hack caused a website failure that compromised the data of 500,000 customers. British Airways also experienced a massive system failure in 2017, which affected 75,000 passengers and cost the company nearly 80 million pounds.

British Airways is not the only airline struggling with software issues. In 2013 American Airlines had to ground all its flights because of a computer glitch. And in 2017 the company had over 1,000 flights at risk of cancellation: the plans of many travelers during the holiday season could have been ruined by a single error in the company’s internal scheduling system, which gave too many pilots the day off.

When it comes to IT failures, no one is safe. Amazon’s AWS, considered one of the most reliable hosting services, experienced a serious outage on the East Coast of the U.S. in 2017. AWS’s infrastructure supports millions of sites, so when the company’s servers go down, it causes a lot of trouble across the internet. It was no surprise that AWS’s “major technical difficulties” led to unprecedented problems for hundreds of popular websites.

Many companies of different sizes and from different industries store their data in AWS data centers, including well-known names such as Netflix, Slack, Business Insider, IFTTT, Nest, Trello, Quora, and Splitwise. Many of them were impacted by the outage mentioned above. A lot of websites went completely offline; Internet of Things devices such as IFTTT lighting controls and Nest thermostats refused to work; Amazon’s assistant Alexa struggled to stay online; not even Amazon’s own AWS status page worked anymore. This points to one thing: as more and more services rely on AWS’s good reputation and move their websites to its servers, even small glitches in a single data center become a really big deal.

A vulnerability in Google+ exposed the private information of nearly 500,000 people who used the social network between 2015 and March 2018. According to a report by the Wall Street Journal, a major part of the problem was a specific API that could be used to access non-public information. Basically, the software glitch allowed outside developers to see the names, email addresses, employment status, gender, and age of the network’s users. The error was discovered in March 2018 and rectified immediately.

The interesting part is that Google did not share the information about the bug in Google+ at once, trying not to get into the limelight of the Cambridge Analytica scandal and draw the regulators’ attention. At the same time, the WSJ report states, although Google has no evidence of data misuse, it also can’t say there was none. In any case, the tech backlash ended sadly for Google+: the consumer version of the network was shut down shortly afterward.

Last year Facebook, whose handling of private information had already been questioned, confirmed that nearly 50 million accounts could be at risk. Hackers exploited a vulnerability in the system that allowed them to access the accounts and possibly the personal information of Facebook’s users. The attack was detected on September 25, 2018. According to The New York Times’ sources, three software flaws in the network’s systems allowed hackers to access user accounts, including that of Mark Zuckerberg, the CEO of Facebook.

The social network’s representatives stated that the hackers probably exploited a vulnerability in the “View As” code, the feature that lets users check how their profile looks to other people. This, in turn, allowed the attackers to acquire authentication tokens, which let users stay logged in without entering their credentials every time. 90 million users were logged out of their accounts the day the vulnerability was discovered; Facebook’s representatives explained that 40 million additional accounts had been logged out as a preventative measure. Back then this data breach was the largest in Facebook’s history. According to a newer UpGuard report, over 540 million records on Facebook users were eventually exposed on Amazon cloud servers.

The cases listed above serve as a reminder of the importance of quality assurance for any type of software. They highlight the need to develop an effective approach to testing as a crucial part of business processes.

The complexity of modern systems is so great that it is usually nearly impossible to perform one particular test and guarantee a perfect result. In most cases, only a combination of manual and automated testing allows you to bring a great product to market. It is important to stress, however, that the test effort has to be adapted to the priorities of the business. Some modules of the software are especially prone to error and thus require greater attention from QA specialists. Testing procedures must also be adapted to the system being tested, because safety issues are much more critical in some systems than in others. The tests must, therefore, be contextual and adapted to the environment.

The testing effort should start as early as possible in the software life cycle. No one will argue that the cost of resolving software bugs during development is significantly lower than the cost of resolving issues when the damage (to customer experience and the company’s reputation) is already done. A detailed and effective testing strategy minimizes the likelihood of errors in the end product that could lead to negative consequences for your business.