Julie Donnelly

The Ultimate Guide to Integration Testing With Docker-Compose and SQL

Updated 9/5/2020

In the previous versions of this article, I stated that the seed container doesn't have to wait for the db container. That turned out not to be the case. The article has been corrected. If you prefer looking at code, here's a sample repo that uses everything described below.

karpikpl/tests-with-docker-compose

An Azure Functions Node.js project generated using Yeoman. Nice to have: docker and docker-compose. As with…

github.com

Integration tests are hard

I’ve been toying with the idea of doing integration tests with docker-compose for a while and that time has finally come :)

This guide should apply to any application that is stateful, with a data store that can be dockerized (not Cosmos DB).

I want a one-script solution — no manual steps. It needs to run both locally and on the continuous integration server (Azure DevOps in my case).

The tests I want to add call a Node.js Azure Function that talks to MS SQL Server and returns data from the database.
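
As a rough sketch of what one of those tests could look like (the endpoint path, port, service name, and response shape below are illustrative assumptions, not the repo's actual code), here is a Mocha test that relies on Node 18's built-in fetch:

// integration.test.js - minimal sketch, run with mocha on Node 18+
// Assumes the function app is reachable as "app" on the compose network
// and exposes a hypothetical GET /api/employees backed by the seeded data.
const assert = require('assert');

describe('GET /api/employees', function () {
  this.timeout(30000); // containers can take a while to warm up

  it('returns the rows seeded into SQL Server', async function () {
    const res = await fetch('http://app:7071/api/employees');
    assert.strictEqual(res.status, 200);

    const body = await res.json();
    assert.ok(Array.isArray(body), 'expected a JSON array');
    assert.ok(body.length > 0, 'expected at least one seeded row');
  });
});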

[Image: the docker-compose file that we're building]
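
A minimal sketch of that file could look like the following (service names, image tags, and paths are assumptions for illustration; the sample repo above is the source of truth). Note the healthcheck on db and the depends_on conditions, which is exactly the "seed has to wait for db" fix mentioned in the update at the top:

version: "2.4"

services:
  db:
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "Your_password123"
    healthcheck:
      # assumed location of sqlcmd inside the SQL Server 2019 image
      test: ["CMD", "/opt/mssql-tools/bin/sqlcmd", "-U", "sa", "-P", "Your_password123", "-Q", "SELECT 1"]
      interval: 10s
      retries: 10

  seed:
    build: ./seed                # pushes the test data into the DB, then exits
    depends_on:
      db:
        condition: service_healthy

  app:
    build: ./app                 # the Node.js Azure Function
    depends_on:
      db:
        condition: service_healthy

  tests:
    build: ./tests               # mocha tests calling the function over HTTP
    depends_on:
      - seed
      - app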

The Application

The application consists of two layers: an Azure Function running Node.js and SQL Server 2019. There's also some test data I want to push to the DB before the tests are executed.
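
The seed step can be as simple as running a small script against the db container. Here is a hedged sketch (database, table, and values are made up for illustration; the real data lives in the sample repo):

-- seed.sql - test data pushed to the DB before the tests are executed
CREATE DATABASE TestDb;
GO
USE TestDb;
GO
CREATE TABLE dbo.Employees (
    Id        INT IDENTITY(1,1) PRIMARY KEY,
    FirstName NVARCHAR(50) NOT NULL,
    LastName  NVARCHAR(50) NOT NULL
);
GO
INSERT INTO dbo.Employees (FirstName, LastName)
VALUES (N'Ada', N'Lovelace'), (N'Grace', N'Hopper');
GO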

The issue

While adding docker-compose for an app is fairly easy, and there are plenty of great guides and tutorials out there, docker-compose is not designed to be an orchestration tool for setting up the DB, seeding it with data, starting the app, and executing tests. In production scenarios, applications should be resilient to any of their dependencies being unavailable, so integration testing is somewhat of an edge case. We really just want the tests to wait until everything is ready, execute, and give us the results.
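
The piece that makes a one-script solution possible is docker-compose's --exit-code-from flag (the mechanism discussed in the compose issue linked in the Materials section): compose stops the whole stack as soon as the tests container exits and propagates its exit code, so the CI step fails when the tests fail. A minimal runner, as a sketch:

#!/usr/bin/env bash
# run-tests.sh - single entry point for local runs and for the CI server

# Build everything, start the stack, and stop as soon as the tests container exits.
# --exit-code-from implies --abort-on-container-exit and makes this script
# return the exit code of the tests container.
docker-compose up --build --exit-code-from tests
EXIT_CODE=$?

# Always tear the stack down so repeated runs start from a clean state.
docker-compose down --volumes

exit $EXIT_CODE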

Materials

Before I show you the solution — here are some great materials that I used to assemble this post.

Integration Testing With Docker Compose

Integration testing is often a difficult venture, especially when it comes to distributed systems. Even if you’re…

blog.harrison.dev

How can I wait for a container to exit? · Issue #5007 · docker/compose

I have a scenario where a container should only be “kept running” in development. In production, it just compile some…

github.com

#docker-compose #integration-testing #azure-functions #docker #nodejs

Cayla Erdman

Introduction to Structured Query Language SQL pdf

SQL stands for Structured Query Language. SQL is a language designed to store, manipulate, and query data held in relational databases. The first incarnation of SQL appeared in 1974, when a group at IBM developed the first prototype of a relational database. The first commercial relational database was released by Relational Software, which later became Oracle.

Standards for SQL exist. However, the SQL that can be used on each of the major RDBMSs today comes in different flavors. This is due to two reasons:

1. The SQL standard is fairly complex, and it is not practical to implement the entire standard.

2. Every database vendor needs a way to differentiate its product from others.

Throughout this guide, differences are noted where appropriate.

#programming books #beginning sql pdf #commands sql #download free sql full book pdf #introduction to sql pdf #introduction to sql ppt #introduction to sql #practical sql pdf #sql commands pdf with examples free download #sql commands #sql free bool download #sql guide #sql language #sql pdf #sql ppt #sql programming language #sql tutorial for beginners #sql tutorial pdf #sql #structured query language pdf #structured query language ppt #structured query language

Software Testing 101: Regression Tests, Unit Tests, Integration Tests

Automation and segregation can help you build better software
If you write automated tests and deliver them to the customer, they can make sure the software is working properly. And, at the end of the day, they paid for it.

OK. We can separate the tests according to different criteria. For example, "white box" tests are used to measure the internal quality of the software, in addition to checking the expected results. They are very useful for knowing the percentage of lines of code executed, the cyclomatic complexity, and several other software metrics. Unit tests are white box tests.

#testing #software testing #regression tests #unit tests #integration tests

Tamia Walter

Testing Microservices Applications

The shift towards microservices and modular applications makes testing more important and more challenging at the same time. You have to make sure that the microservices running in containers perform well and as intended, but you can no longer rely on conventional testing strategies to get the job done.

This is where new testing approaches are needed. Testing your microservices applications requires the right approach, a suitable set of tools, and immense attention to detail. This article will guide you through the process of testing your microservices and talk about the challenges you will have to overcome along the way. Let's get started, shall we?

A Brave New World

Traditionally, testing a monolith application meant configuring a test environment and setting up all of the application components in a way that matched the production environment. It took time to set up the testing environment, and there were a lot of complexities around the process.

Testing also requires the application to run in full. It is not possible to test monolith apps on a per-component basis, mainly because there is usually a shared code base that ties everything together, and the app is designed to run as a whole to work properly.

Microservices running in containers offer one particular advantage: universal compatibility. You don’t have to match the testing environment with the deployment architecture exactly, and you can get away with testing individual components rather than the full app in some situations.

Of course, you will have to embrace the new cloud-native approach across the pipeline. Rather than creating critical dependencies between microservices, you need to treat each one as a semi-independent module.

The only monolith or centralized portion of the application is the database, but this too is an easy challenge to overcome. As long as you have a persistent database running on your test environment, you can perform tests at any time.

Keep in mind that there are additional things to focus on when testing microservices.

  • Microservices rely on network communication to talk to each other, so network reliability and requirements must be part of the testing.
  • Automation and infrastructure elements are now defined as code, and you have to make sure that they also run properly when microservices are pushed through the pipeline.
  • While containerization is universal, you still have to pay attention to specific dependencies and create a testing strategy that allows those dependencies to be included.

Test containers are the method of choice for many developers. Unlike monolith apps, which let you use stubs and mocks for testing, microservices need to be tested in test containers. Many CI/CD pipelines actually integrate production microservices as part of the testing process.

Contract Testing as an Approach

As mentioned before, there are many ways to test microservices effectively, but the one approach that developers now use reliably is contract testing. Loosely coupled microservices can be tested in an effective and efficient way using contract testing, mainly because this testing approach focuses on contracts; in other words, it focuses on how components or microservices communicate with each other.

Syntax and semantics define how components communicate with each other. By defining syntax and semantics in a standardized way, and testing microservices based on their ability to generate the right message formats and meet behavioral expectations, you can rest assured that the microservices will behave as intended when deployed.
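
As a purely illustrative sketch (the service URL, fields, and types are assumptions, and a real project would typically use a dedicated tool such as Pact), a consumer-side contract check can be as small as pinning down the expected message format and replaying it against the provider:

// orders.contract.test.js - illustrative sketch only, run with mocha on Node 18+
const assert = require('assert');

// Field names and JavaScript types the consumer relies on (assumed example).
const orderContract = {
  id: 'number',
  status: 'string',
  items: 'object', // an array of line items
};

function satisfiesContract(payload, contract) {
  return Object.entries(contract).every(
    ([field, type]) => typeof payload[field] === type
  );
}

describe('orders provider honours the consumer contract', () => {
  it('returns the fields and types the consumer expects', async () => {
    const res = await fetch('http://orders-service/api/orders/42'); // assumed endpoint
    const body = await res.json();
    assert.ok(satisfiesContract(body, orderContract), 'response violates the contract');
  });
});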

Ways to Test Microservices

It is easy to fall into the trap of making testing microservices complicated, but there are ways to avoid this problem. Testing microservices doesn’t have to be complicated at all when you have the right strategy in place.

There are several ways to test microservices too, including:

  • Unit testing: allows developers to test microservices in a granular way. It doesn't limit testing to individual microservices, but rather lets developers take a more granular approach, such as testing individual features or runtimes.
  • Integration testing: handles the testing of microservices in an interactive way. Microservices still need to work with each other when they are deployed, and integration testing is a key process in making sure that they do.
  • End-to-end testing: as the name suggests, tests microservices as a complete app. This type of testing covers features, UI, communications, and the other components that make up the app.

What's important to note is that these testing approaches allow for asynchronous testing. After all, asynchronous development is what makes developing microservices so appealing in the first place. By allowing for asynchronous testing, you can also make sure that components or microservices can be updated independently of one another.

#blog #microservices #testing #caylent #contract testing #end-to-end testing #hoverfly #integration testing #microservices #microservices architecture #pact #testing #unit testing #vagrant #vcr

Cayla Erdman

The Easy Guide on How to Use Subqueries in SQL Server

Let's say the chief credit and collections officer asks you to list the names of people, their unpaid balances per month, and the current running balance, and wants you to import this data into Excel. The purpose is to analyze the data and come up with an offer that makes payments lighter, to mitigate the effects of the COVID-19 pandemic.

Do you opt to use a query and a nested subquery or a join? What decision will you make?

SQL Subqueries – What Are They?

Before we do a deep dive into syntax, performance impact, and caveats, why not define a subquery first?

In the simplest terms, a subquery is a query within a query. While a query that embodies a subquery is the outer query, we refer to a subquery as the inner query or inner select. And parentheses enclose a subquery similar to the structure below:

SELECT
    col1,
    col2,
    (subquery) AS col3
FROM table1
[JOIN table2 ON table1.col1 = table2.col2]
WHERE col1 <operator> (subquery)

We are going to cover the following points in this post:

  • SQL subquery syntax depending on different subquery types and operators.
  • When and in what sort of statements one can use a subquery.
  • Performance implications vs. JOINs.
  • Common caveats when using SQL subqueries.

As is customary, we provide examples and illustrations to enhance understanding. But bear in mind that the main focus of this post is on subqueries in SQL Server.

Now, let’s get started.

Make SQL Subqueries That Are Self-Contained or Correlated

For one thing, subqueries are categorized based on their dependency on the outer query.

Let me describe what a self-contained subquery is.

Self-contained subqueries (or sometimes referred to as non-correlated or simple subqueries) are independent of the tables in the outer query. Let me illustrate this:

-- Get sales orders of customers from Southwest United States 
-- (TerritoryID = 4)

USE [AdventureWorks]
GO
SELECT CustomerID, SalesOrderID
FROM Sales.SalesOrderHeader
WHERE CustomerID IN (SELECT [CustomerID]
                     FROM [AdventureWorks].[Sales].[Customer]
                     WHERE TerritoryID = 4)

As demonstrated in the above code, the subquery (the part enclosed in parentheses) has no references to any column in the outer query. Additionally, you can highlight the subquery in SQL Server Management Studio and execute it without getting any runtime errors.

Which, in turn, leads to easier debugging of self-contained subqueries.

The next thing to consider is correlated subqueries. Compared to its self-contained counterpart, this one has at least one column being referenced from the outer query. To clarify, I will provide an example:

USE [AdventureWorks]
GO
SELECT DISTINCT p.LastName, p.FirstName, e.BusinessEntityID
FROM Person.Person AS p
JOIN HumanResources.Employee AS e ON p.BusinessEntityID = e.BusinessEntityID
WHERE 1262000.00 IN
    (SELECT [SalesQuota]
    FROM Sales.SalesPersonQuotaHistory spq
    WHERE p.BusinessEntityID = spq.BusinessEntityID)

Were you attentive enough to notice the reference to BusinessEntityID from the Person table? Well done!

Once a column from the outer query is referenced in the subquery, it becomes a correlated subquery. One more point to consider: if you highlight a subquery and execute it, an error will occur.

And yes, you are absolutely right: this makes correlated subqueries quite a bit harder to debug.

To make debugging possible, follow these steps:

  • Isolate the subquery.
  • Replace the reference to the outer query with a constant value.

Isolating the subquery for debugging will make it look like this:

SELECT [SalesQuota]
    FROM Sales.SalesPersonQuotaHistory spq
    WHERE spq.BusinessEntityID = <constant value>

Now, let’s dig a little deeper into the output of subqueries.

Make SQL Subqueries With 3 Possible Returned Values

Well, first, let's think about what returned values we can expect from SQL subqueries.

In fact, there are 3 possible outcomes:

  • A single value
  • Multiple values
  • Whole tables

Single Value

Let’s start with single-valued output. This type of subquery can appear anywhere in the outer query where an expression is expected, like the WHERE clause.

-- Output a single value which is the maximum or last TransactionID
USE [AdventureWorks]
GO
SELECT TransactionID, ProductID, TransactionDate, Quantity
FROM Production.TransactionHistory
WHERE TransactionID = (SELECT MAX(t.TransactionID) 
                       FROM Production.TransactionHistory t)

When you use the MAX() function, you retrieve a single value. That's exactly what happened in our subquery above. Using the equals (=) operator tells SQL Server that you expect a single value. Another thing: if the subquery returns multiple values while you use the equals (=) operator, you get an error similar to the one below:

Msg 512, Level 16, State 1, Line 20
Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.

Multiple Values

Next, we examine the multi-valued output. This kind of subquery returns a list of values with a single column. Additionally, operators like IN and NOT IN will expect one or more values.

-- Output multiple values: a list of customers with last names that start with 'I'

USE [AdventureWorks]
GO
SELECT [SalesOrderID], [OrderDate], [ShipDate], [CustomerID]
FROM Sales.SalesOrderHeader
WHERE [CustomerID] IN (SELECT c.[CustomerID] FROM Sales.Customer c
INNER JOIN Person.Person p ON c.PersonID = p.BusinessEntityID
WHERE p.lastname LIKE N'I%' AND p.PersonType='SC')

Whole Table Values

And last but not least, let's delve into whole-table outputs.

-- Output a table of values based on sales orders
USE [AdventureWorks]
GO
SELECT [ShipYear],
COUNT(DISTINCT [CustomerID]) AS CustomerCount
FROM (SELECT YEAR([ShipDate]) AS [ShipYear], [CustomerID] 
      FROM Sales.SalesOrderHeader) AS Shipments
GROUP BY [ShipYear]
ORDER BY [ShipYear]

Have you noticed the FROM clause?

Instead of using a table, it used a subquery. This is called a derived table or a table subquery.

And now, let me present some ground rules for using this sort of query:

  • All columns in the subquery should have unique names. Much like a physical table, a derived table should have unique column names.
  • ORDER BY is not allowed unless TOP is also specified. That’s because the derived table represents a relational table where rows have no defined order.

In this case, a derived table has the benefits of a physical table. That’s why in our example, we can use COUNT() in one of the columns of the derived table.
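
To illustrate the second ground rule, here is a minimal sketch against the same AdventureWorks database: the ORDER BY inside the derived table is only accepted because TOP is specified alongside it.

-- ORDER BY is allowed in a derived table only together with TOP
USE [AdventureWorks]
GO
SELECT [SalesOrderID], [TotalDue]
FROM (SELECT TOP (10) [SalesOrderID], [TotalDue]
      FROM Sales.SalesOrderHeader
      ORDER BY [TotalDue] DESC) AS BiggestOrders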

That's about all regarding subquery outputs. But before we go any further, you may have noticed that the logic behind the multiple-values example (and others as well) can also be implemented using a JOIN.

-- Output multiple values which is a list of customers with lastnames that start with 'I'
USE [AdventureWorks]
GO
SELECT o.[SalesOrderID], o.[OrderDate], o.[ShipDate], o.[CustomerID]
FROM Sales.SalesOrderHeader o
INNER JOIN Sales.Customer c on o.CustomerID = c.CustomerID
INNER JOIN Person.Person p ON c.PersonID = p.BusinessEntityID
WHERE p.LastName LIKE N'I%' AND p.PersonType = 'SC'

In fact, the output will be the same. But which one performs better?

Before we get into that, let me tell you that I have dedicated a section to this hot topic. We’ll examine it with complete execution plans and have a look at illustrations.

So, bear with me for a moment. Let’s discuss another way to place your subqueries.

#sql server #sql query #sql server #sql subqueries #t-sql statements #sql