In this article, we’ll look at what software testing is, and why you should care about it. We’ll learn how to design unit tests and how to write Python unit tests. In particular, we’ll look at two of the most widely used unit testing frameworks in Python: unittest and pytest.
Software testing is the process of examining the behavior of a software product to evaluate and verify that it’s consistent with its specifications. Software products can have thousands of lines of code, and hundreds of components that work together. If a single line doesn’t work properly, the bug can propagate and cause other errors. So, to be sure that a program acts as it’s supposed to, it has to be tested.
Since modern software can be quite complicated, there are multiple levels of testing that evaluate different aspects of correctness. As stated by the ISTQB Certified Tester Foundation Level syllabus, there are four levels of software testing:
In this article, we’ll talk about unit testing, but before we dig deep into that, I’d like to introduce an important principle in software testing.
Testing shows the presence of defects, not their absence.
In other words, even if all the tests you run don’t show any failure, this doesn’t prove that your software system is bug-free, or that another test case won’t find a defect in the behavior of your software.
This is the first level of testing, also called component testing. At this level, the individual software components are tested. Depending on the programming language, a software unit might be a class, a function, or a method. For example, if you have a Java class called ArithmeticOperations that has multiply and divide methods, unit tests for the ArithmeticOperations class will need to test the correct behavior of both the multiply and divide methods.
Unit tests are usually written and run by software developers. To run unit tests, developers need access to the source code, because the source code itself is the object under test. For this reason, this approach, which tests the source code directly, is called white-box testing.
You might be wondering why you should worry about software testing, and whether it’s worth it or not. In the next section, we’ll analyze the motivation behind testing your software system.
The main advantage of software testing is that it improves software quality. Software quality is crucial, especially in a world where software handles a wide variety of our everyday activities. But “improving the quality of the software” is still too vague a goal, so let’s be more specific about what we mean by software quality. According to the ISO/IEC 9126-1 standard, software quality includes these factors:
If you own a company, software testing is an activity that you should consider carefully, because it can have an impact on your business. For example, in May 2022, Tesla recalled 130,000 cars due to an issue in the vehicles’ infotainment systems. This issue was then fixed with a software update distributed “over the air”. These failures cost the company time and money, and they also caused problems for customers, who couldn’t use their cars for a while. Testing software indeed costs money, but it’s also true that companies can save millions in technical support.
Unit testing focuses on checking whether or not the software behaves correctly, which means verifying that the mapping between inputs and outputs is correct. As a low-level testing activity, unit testing helps identify bugs early so that they aren’t propagated to higher levels of the software system.
Other advantages of unit testing include:
Let’s now look at how to design a testing strategy.
Before starting to plan a test strategy, there’s an important question to answer. What parts of your software system do you want to test?
This is a crucial question, because exhaustive testing is impossible. For this reason, you can’t test every possible input and output, but you should prioritize your tests based on the risks involved.
Many factors need to be taken into account when defining your test scope:
Once you define the testing scope, which specifies what you should test and what you shouldn’t test, you’re ready to talk about the qualities that a good unit test should have.
There’s one last step missing before diving deep into unit testing in Python. How do we organize our tests to make them clean and easy to read? We use a pattern called Arrange, Act and Assert (AAA).
The Arrange, Act and Assert pattern is a common strategy used to write and organize unit tests. It works in the following way:
This strategy provides a clean approach to organizing unit tests by separating all the main parts of a test: setup, execution and verification. Plus, unit tests are easier to read, because they all follow the same structure.
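As a minimal sketch (the function and values here are illustrative, not from the article), an AAA-structured test looks like this:

```python
def test_append_adds_item():
    # Arrange: set up the object under test and any required data
    items = []

    # Act: perform the single behavior being tested
    items.append(42)

    # Assert: verify the outcome
    assert items == [42]
```

Keeping the three sections visually separated makes each test’s intent obvious at a glance.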
We’ll now talk about two different unit testing frameworks in Python: unittest and pytest.
The Python standard library includes the unittest unit testing framework. This framework is inspired by JUnit, which is a unit testing framework in Java.
As stated in the official documentation, unittest supports a few important concepts that we’ll mention in this article.
unittest has its own way of writing tests. In particular, we need to write our tests as methods of a class that subclasses unittest.TestCase.
Since unittest is part of the standard library, it’s already installed, so we’re ready to write our first unit test!
Let’s say that we have the BankAccount class:
import unittest

class BankAccount:
    def __init__(self, id):
        self.id = id
        self.balance = 0

    def withdraw(self, amount):
        if self.balance >= amount:
            self.balance -= amount
            return True
        return False

    def deposit(self, amount):
        self.balance += amount
        return True
We shouldn’t be able to withdraw more money than the available balance, so let’s test that this scenario is handled correctly by our source code.
In the same Python file, we can add the following code:
class TestBankOperations(unittest.TestCase):
    def test_insufficient_deposit(self):
        # Arrange
        a = BankAccount(1)
        a.deposit(100)
        # Act
        outcome = a.withdraw(200)
        # Assert
        self.assertFalse(outcome)
We’re creating a class called TestBankOperations that’s a subclass of unittest.TestCase. In this way, we’re creating a new test case.
Inside this class, we define a single test method whose name starts with test. This is important, because every test method’s name must start with the word test.
We expect the withdraw method to return False, which means that the operation failed. To assert the result, we use a special assertion method called assertFalse().
We’re ready to execute the test. Let’s run this command on the command line:
python -m unittest example.py
Here, example.py is the name of the file containing all the source code. The output should look something like this:
.
----------------------------------------------------------------------
Ran 1 test in 0.001s
OK
Good! This means that our test was successful. Let’s see now how the output looks when there’s a failure. We add a new test to the previous class. Let’s try to deposit a negative amount of money, which of course isn’t possible. Will our code handle this scenario?
This is our new test method:
    def test_negative_deposit(self):
        # Arrange
        a = BankAccount(1)
        # Act
        outcome = a.deposit(-100)
        # Assert
        self.assertFalse(outcome)
We can use the verbose mode of unittest to execute this test by passing the -v flag:
python -m unittest -v example.py
And the output is now different:
test_insufficient_deposit (example.TestBankOperations) ... ok
test_negative_deposit (example.TestBankOperations) ... FAIL
======================================================================
FAIL: test_negative_deposit (example.TestBankOperations)
----------------------------------------------------------------------
Traceback (most recent call last):
File "example.py", line 35, in test_negative_deposit
self.assertFalse(outcome)
AssertionError: True is not false
----------------------------------------------------------------------
Ran 2 tests in 0.002s
FAILED (failures=1)
In this case, the verbose flag gives us more information. We know that test_negative_deposit failed. In particular, the AssertionError tells us that the expected outcome was supposed to be false, but True is not false, which means that the method returned True.
The unittest framework provides different assertion methods, based on our needs:
- assertEqual(x, y), which tests whether x == y
- assertRaises(exception_type), which checks if a specific exception is raised
- assertIsNone(x), which tests whether x is None
- assertIn(x, y), which tests whether x in y
Now that we have a basic understanding of how to write unit tests using the unittest framework, let’s have a look at the other Python framework, called pytest.
The pytest framework is a Python unit testing framework with a few relevant features, including the ability to run unittest test suites.
Since pytest isn’t installed by default, we have to install it first. Note that pytest requires Python 3.7+.
Installing pytest is quite easy. You just have to run this command:
pip install -U pytest
Then check that everything has been installed correctly by typing this:
pytest --version
The output should look something like this:
pytest 7.1.2
Good! Let’s write our first test using pytest.
We’ll use the BankAccount class written before, and we’ll test the same methods as before. In this way, it’s easier to compare the effort needed to write tests using the two frameworks.
To test with pytest, we need to write our tests in files whose names start with test_ or end with _test.py. pytest will look for those files in the current directory and its subdirectories.
So, we create a file called test_bank.py and we put it into a folder. This is what our first test function looks like:
def test_insufficient_deposit():
    # Arrange
    a = BankAccount(1)
    a.deposit(100)
    # Act
    outcome = a.withdraw(200)
    # Assert
    assert outcome == False
As you may have noticed, the only thing that changed with respect to the unittest version is the assert section. Here we use the plain Python assert statement.
And now we can have a look at the test_bank.py file:
class BankAccount:
    def __init__(self, id):
        self.id = id
        self.balance = 0

    def withdraw(self, amount):
        if self.balance >= amount:
            self.balance -= amount
            return True
        return False

    def deposit(self, amount):
        self.balance += amount
        return True

def test_insufficient_deposit():
    # Arrange
    a = BankAccount(1)
    a.deposit(100)
    # Act
    outcome = a.withdraw(200)
    # Assert
    assert outcome == False
To run this test, let’s open a command prompt inside the folder where the test_bank.py file is located. Then, run this:
pytest
The output will be something like this:
======== test session starts ========
platform win32 -- Python 3.7.11, pytest-7.1.2, pluggy-0.13.1
rootdir: \folder
plugins: anyio-2.2.0
collected 1 item
test_bank.py . [100%]
======== 1 passed in 0.02s ========
In this case, we can see how easy it is to write and execute a test. We also wrote less code compared to unittest, and the result of the test is quite easy to understand.
Let’s move on to see a failed test!
We use the second method we wrote before, called test_negative_deposit. We refactor the assert section, and this is the result:
def test_negative_deposit():
    # Arrange
    a = BankAccount(1)
    # Act
    outcome = a.deposit(-100)
    # Assert
    assert outcome == False
We run the test in the same way as before, and this should be the output:
======= test session starts =======
platform win32 -- Python 3.7.11, pytest-7.1.2, pluggy-0.13.1
rootdir: \folder
plugins: anyio-2.2.0
collected 2 items
test_bank.py .F [100%]
======= FAILURES =======
_____________ test_negative_deposit _____________
def test_negative_deposit():
# Arrange
a = BankAccount(1)
# Act
outcome = a.deposit(-100)
# Assert
> assert outcome == False
E assert True == False
test_bank.py:32: AssertionError
======= short test summary info =======
FAILED test_bank.py::test_negative_deposit - assert True == False
======= 1 failed, 1 passed in 0.15s =======
By parsing the output, we can read collected 2 items, which means that two tests have been executed. Scrolling down, we can read that a failure occurred while testing the test_negative_deposit method. In particular, the error occurred when evaluating the assertion. Plus, the report also says that the value of the outcome variable is True, so this means that the deposit method contains an error.
Since pytest uses the default Python assert keyword, we can compare any output we get with another variable that stores the expected outcome, all without using special assertion methods.
To wrap it up, in this article we covered the basics of software testing. We discovered why software testing is essential and why everyone should test their code. We talked about unit testing, and how to design and implement simple unit tests in Python.
We used two Python frameworks called unittest and pytest. Both have useful features, and they’re two of the most-used frameworks for Python unit testing.
In the end, we saw two basic test cases to give you an idea of how tests are written following the Arrange, Act and Assert pattern.
I hope I’ve convinced you of the importance of software testing. Choose a framework such as unittest or pytest, and start testing, because it’s worth the extra effort!
Original article source at: https://www.sitepoint.com/
In this article, we’ll learn how to create an API test automation framework using pytest. APIs are an integral part of software development in any small, mid, or large-scale application, which means that testing these APIs will dramatically improve the reliability of the entire application.
There are several benefits to API Testing, including:
We'll now learn to create an API Test Automation Framework from scratch.
You'll need a basic understanding of REST API, HTTP methods, and response codes.
Let's get started!
Check the version of Python installed on your machine:
Install and check the version of pip on your machine (pip --version).
Install requests library
pip install requests
Creation of Project Architecture
You can follow the above project architecture.
The test_service1.py module will contain the test cases.
Methods are defined as:
def test_<testcasename>:
They are considered test cases.
For example:
def test_service1(self):
    payload_json = json.dumps(payload_service1.data, default=str)
    request = HttpRequest()
    log.info("The API call is: " + self.URL + "<endpoint>")
    headers = {
        "Content-Type": "application/json; charset=utf-8",
        "token": self.token,
    }
    response = request.send_request_with_data(self.URL + "<endpoint>",
                                              headers=headers, data=payload_json)
    response_code = response.status_code
    if response_code != 204:
        data = json.loads(response.text)
        log.info("The custom status code is " + str(data['statusCode']))
        assert int(data['statusCode']) == 200
        expected_response = json.dumps(service1_expected_response.expected)
        expected = json.loads(expected_response)
        for key in data.keys():
            if key not in expected:
                assert False, "Key mismatch"
        log.info("The custom status code is " + str(data['statusCode']))
        assert int(data['statusCode']) == 200
# get_token is meant to be a pytest fixture (note the yield and the
# request.cls usage, which shares the token with class-based tests)
@pytest.fixture(scope="class")
def get_token(request, url):
    payload = {
        "Key": Constants.KEY,
    }
    headers = {"Content-Type": "application/json; charset=utf-8"}
    requests = HttpRequest()
    log.info("The API call is: " + url + "<endpoint>")
    response = requests.send_request_with_data(url + "<endpoint>", json=payload,
                                               headers=headers, redirects=False)
    response_code = response.status_code
    log.info("The status code is " + str(response_code))
    response = json.loads(response.text)
    token = response['Token']
    if request.cls is not None:
        request.cls.token = token
        request.cls.URL = url
    yield token
requestHelpers.py
This module contains the helper functions required for the API requests. You can import the requests module to create the helper functions. For example:
class HttpRequest(object):
    session = None

    def __init__(self):
        self.session = requests.session()

    def send_request_with_data(self, url, data=None, json=None, headers=None, redirects=True):
        try:
            conn = self.session.post(url, headers=headers, json=json, data=data,
                                     verify=False, allow_redirects=redirects, timeout=600)
        except Exception as exception:
            raise exception
        return conn
requirements.txt
This file contains all the modules that are prerequisites for this test automation framework:
pytest
requests
pytest-logger
pytest-html
allure-pytest
Use of markers:
@pytest.mark.service1
Markers can be used to execute specific sets of test cases. For example, if you want to execute the tests specific to service1, you can apply the custom marker above as a decorator on those tests.
Marked tests can be run with the command:
pytest -m "marker_name"
You can register your custom markers by creating a pytest.ini file. The contents of the file can be:
[pytest]
markers =
marker1: description
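As a sketch, a marked test might look like this (service1 is the custom marker name, registered in pytest.ini, and the test body is a hypothetical placeholder):

```python
import pytest

@pytest.mark.service1
def test_service1_health():
    # selected when invoked with: pytest -m "service1"
    assert True
```

Unregistered markers trigger a PytestUnknownMarkWarning, which is why the pytest.ini registration step above matters.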
We'll use the allure reporting tool for our API Automation tests.
Follow the steps below to set up allure reporting:
pip install allure-pytest
To generate the report in a custom folder:
pytest test.py --alluredir=<path_to_custom_folder>
Once the tests are executed, you can go to the custom report folder and view the report using:
allure serve <Report_folder_name>
There's much more to API test automation than just creating the framework architecture. I'll write more about the specific insights of each useful package/module in this framework in the upcoming write-ups.
Original article sourced at: https://dzone.com
In this pytest tutorial, we’ll learn how to use pytest fixtures, with examples. Pytest has a number of great features, and one of those special features is fixtures. Using pytest fixtures to test your application is one way you can significantly increase code quality. Higher-quality code, plus more readable documentation, leads to a massive reduction in the cost of resources for our applications.
Pytest is one of the most popular testing modules for Python. It’s used for Python API test cases, database projects, artificial intelligence, and even blockchain applications. Furthermore, pytest and its features, like fixtures, are highly configurable and don’t have much boilerplate. Having the ability to use pytest with fixtures alone can create a career path for any talented Python developer.
In this step-by-step guide, we’ll quickly go through how to set up pytest with fixtures. We’ll also go into detail into the different types of fixtures, with examples. By the end, you should have a good idea of how fixtures work in pytest.
Pytest fixtures are functions that can be used to manage our app’s state and dependencies. Most importantly, they can provide data for testing, covering a wide range of value types, when explicitly requested by a test. You can use the mock data that fixtures create across multiple tests.
@pytest.fixture
def one():
    return 1
Fixtures are very flexible and have multiple use cases. Since Python is an object-oriented programming language, we can pass different types of objects to be used as test data, such as integers, strings, lists, dictionaries, booleans, classes, floats, and complex numbers.
And did I mention that it’s free and open source? Pytest also has over 900 plugins for developers to use. You can see a complete list of them here.
Now that you know what pytest fixtures are, let’s see how to use them. First, you need to have pytest installed on your machine.
Pytest can be installed on most Python environments, Jupyter Notebook, and Colab. The following guide assumes you’re installing it on a local machine.
virtualenv -p python3 folder_name
Start your local virtual environment. Replace pytest_example with your project folder’s name.
virtualenv -p python3 pytest_example
After creating the virtual environment, move into the new directory and activate it.
cd pytest_example
.\Scripts\activate
Run the following command to make sure that pytest is installed in your system:
pip install pytest
Create at least one pytest test file. Keep in mind that test methods need the test_ prefix, and test files need the test_ prefix or _test suffix to be recognized. test_file_name.py reads better than file_name_test.py; pytest will not collect a plain file_name.py.
Import the pytest module in all Python files you want to test, as well as in any associated configuration files for the fixtures.
import pytest
We have to indicate that the function is a fixture with @pytest.fixture. This decorator tells pytest that the next function is a fixture.
@pytest.fixture
The simplest pytest fixture just returns an object, like an integer.
@pytest.fixture
def one():
    return 1
Either in the same file or a different test file, we can create tests that request the fixtures needed. This is how we can test their assertions. These test methods need to start with the prefix test_.
def test_we_are(one):
    assert one == 1
Finally, tell pytest to test your code. Pytest will test all test files in the current directory and subdirectories with the correct prefix or suffix.
pytest
Adding -v gives the results more verbosity and detail to our tests. We can now see which specific tests have failed or passed.
pytest -v
When tests are run, which tests fail or pass are clearly labeled. Our asserts test whether a certain logic statement is true or not. AssertionError means that the assertion is false, and that’s why the test has failed. These play a particularly important role in testing for bugs in our system. Other errors we come across will be coding errors and bad naming conventions in our test code.
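To see this concretely, a failed assert raises AssertionError, carrying any message we attach (a small standalone sketch, not part of the test suite above):

```python
def check_positive(value):
    # the assert statement raises AssertionError when its condition is false
    assert value > 0, "value must be positive"

try:
    check_positive(-1)
except AssertionError as error:
    print("test failed:", error)  # prints: test failed: value must be positive
```

Pytest catches exactly this exception when a test’s assertion is false and reports the test as failed.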
To make it easier on ourselves, we can define a fixture to use across multiple test files. These fixtures are defined by creating a file called conftest.py.
import pytest

@pytest.fixture
def important_value():
    important = True
    return important
Additionally, we can call separate files that also import the fixture from our configuration file. By calling test files specifically by name, instead of running multiple test files, we reduce the number of test results that we need to read in the terminal.
import pytest

def test_if_important(important_value):
    assert important_value == True
Now, run the following command:
pytest test_important.py
Fixtures are modular. This means one or more fixtures may be dependent on another fixture. Therefore, if we change one fixture, it may result in changing the function of other fixtures. This causes our test suite to scale. This also works well in our configuration file.
One fixture simply requests the other, and hey, presto! We can combine all sorts of objects together, like strings, or do complex math.
@pytest.fixture
def me():
    return "me"

@pytest.fixture
def together(me):
    return "you and " + me
This flexibility gives us the ability to create all sorts of different combinations of fixtures. If that doesn’t make you happy, you can even combine two or more fixtures together. So, we add this next piece of code to our configuration file. The complete fixture requests two inputs from two previous fixtures, so we also define the happy fixture it depends on:

@pytest.fixture
def happy():
    return " are happy."

@pytest.fixture
def complete(together, happy):
    return together + happy
This is followed by one more test to our modular test file:
def test_modular_complete(complete):
    assert complete == "you and me are happy."
We’re going to test the previous code by also demonstrating how to single out specific tests in our test suite.
We can use a string to run only tests with a denoted string in the definition name. This will just be one test with a string name, like this:
pytest -k string_name
or multiple tests if we’ve used the string in many other tests as well:
pytest -v -k modular
We can, of course, use strings to name the whole test and thus successfully single out one test case:
pytest -v -k modular_complete
When we look at the default settings for fixtures, we have a few parameters we can use to further customize our tests to our needs.
@fixture(fixture_function=None, *, scope='function', params=None, autouse=False, ids=None, name=None)
You can look into these parameters in more detail in the pytest docs.
Additionally, we can give our fixtures names or IDs. This is helpful when we’re using the fixture in the same module. In these scenarios, we give the fixture functions the prefix fixture_ so their names don’t shadow the fixture itself.
@pytest.fixture(name="my_account")
def fixture_my_account():
    balance = 0
    return balance
We can set up all of the tests to automatically request a fixture by adding autouse=True to the fixture’s decorator. Even when a test doesn’t request the fixture, it runs anyway:

@pytest.fixture(autouse=True)
def meaning_of_life():
    return 42
Pytest fixtures have a range of uses for different situations. In short, different scopes destroy each fixture at different times in the tests and sessions. Each fixture defaults to function scope. Additionally, we can choose module, class, package, or session scope.
Here’s a more detailed explanation of fixtures scopes from the website Better Programming.
After function, module is the next most useful scope. The genius behind the module scope is that it creates an object the first time a test requests the fixture. This object is then reused by all the tests in the module, and when they’re complete, it’s torn down.
The advantage of the module scope is that it uses fewer resources, since it only creates the object once instead of two or more times. For example, this fixture, taken from the pytest docs, helps test the SMTP connection to Gmail’s service. If we didn’t use the module scope, the tests would take longer to run.
import smtplib

import pytest

@pytest.fixture(scope="module")
def smtp_connection():
    return smtplib.SMTP("smtp.gmail.com", 587, timeout=5)
We must remember that the purpose of software testing with pytest fixtures is to find bugs, not to prove that there are no bugs.
Original article sourced at: https://www.testim.io
1670086500
In this article we will learn how to setup pytest with fixtures with example. Pytest has a number of great features. One of those special features is fixtures. Using pytest fixtures to test your application is one way you can exponentially increase code quality. Higher-quality code, plus more readable documentation, leads to a massive reduction in the cost of resources for our applications.
Pytest is one of the most popular testing modules for Python. Pytest is used for Python API test cases, database projects, artificial intelligence, and even for blockchain applications. Furthermore, pytest and its features, like fixtures, are highly configurable and doesn’t have much boilerplate. Having the ability to use pytest with fixtures alone can create a career path for any talented Python developer.
In this step-by-step guide, we’ll quickly go through how to set up pytest with fixtures. We’ll also go into detail into the different types of fixtures, with examples. By the end, you should have a good idea of how fixtures work in pytest.
Pytest fixtures are functions that can be used to manage our apps states and dependencies. Most importantly, they can provide data for testing and a wide range of value types when explicitly called by our testing software. You can use the mock data that fixtures create across multiple tests.
@pytest.fixture
def one():
return 1
Fixtures are very flexible and have multiple uses cases. Since Python is an object-oriented programming language, we can parse different types of objects to be used as test data such as integers, strings, lists, dictionaries, booleans, classes, floats, and other complex numbers.
And did I mention that it’s free and open source? Pytest also has over 900 plugins for developers to use. You can see a complete list of them here.
Now that you know what pytest fixtures are, let’s see how to use them. First, you need to have pytest installed on your machine.
Pytest can be installed on most Python environments, Jupyter Notebook, and Colab. The following guide assumes you’re installing it on a local machine.
virtualenv -p python3 folder_name
Start your local virtual environment. Replace pytest_example with your project folder’s name.
virtualenv -p python3 pytest_example
After creating the virtual environment, move into the new directory and activate it.
cd pytest_example
.\Scripts\activate
Run the following command to make sure that pytest is installed in your system:
pip install pytest
Create at least one pytest file. Keep in mind that both methods and test files need the test_ prefix or _test suffix to be recognized as a test file.
test_file_name.py reads better than file_name_test.py. Pytest will not run file_name.py or test_file_name.py.
Import the pytest module in all Python files you want to test, as well as in any associated configuration files for the fixtures.
Import pytest
We have to indicate that the function is a fixture with @pytest.fixture. These specific Python decorations let us know that the next method is a pytest fixture.
@pytest.fixture
Implementing the simplest pytest fixture can just return an object, like an integer.
@pytest.fixture
def one():
return 1
Either in the same file or a different test file, we can create tests that request the fixtures needed. This is how we can test their assertions. These test methods need to start with the prefix test_.
def test_we_are(one):
assert one == 1
Finally, tell pytest to test your code. Pytest will test all test files in the current directory and subdirectories with the correct prefix or suffix.
pytest
Adding -v gives the results more verbosity and detail to our tests. We can now see which specific tests have failed or passed.
pytest -v
When tests are run, which tests fail or pass are clearly labeled. Our asserts test whether a certain logic statement is true or not. AssertionError means that the assertion is false, and that’s why the test has failed. These play a particularly important role in testing for bugs in our system. Other errors we come across will be coding errors and bad naming conventions in our test code.
To make it easier on ourselves, we can define a fixture to use across multiple test files. These fixtures are defined by creating a file called conftest.py.
import pytest
@pytest.fixture
def important_value():
important = True
return important
Additionally, we can call separate files that also import the fixture from our configuration file. By calling test files specifically by name, instead of running multiple test files, we reduce the number of test results that we need to read in the terminal.
import pytest
def test_if_important(important_value):
assert important_value == True
Now, run the following command:
pytest test_important.py
Fixtures are modular. This means one or more fixtures may be dependent on another fixture. Therefore, if we change one fixture, it may result in changing the function of other fixtures. This causes our test suite to scale. This also works well in our configuration file.
One fixture simply requests the other, and hey, presto! We can combine all sorts of objects together, like strings, or do complex math.
@pytest.fixture
def me():
return "me"
@pytest.fixture
def together(me):
return "you and " + me
This flexibility gives us the ability to create all sorts of different combinations of fixtures. If that doesn’t make you happy, you can even combine two or more fixtures together. So, we add this next piece of code to our configuration file. The following fixture requests two inputs from two previous fixtures.
@pytest.fixture
def complete(together, happy):
return together + happy
This is followed by one more test to our modular test file:
def test_modular_complete(complete):
assert complete == "you and me are happy."
We’re going to test the previous code by also demonstrating how to single out specific tests in our test suite.
We can use a string to run only tests with a denoted string in the definition name. This will just be one test with a string name, like this:
pytest -k string_name
or multiple tests if we’ve used the string in many other tests as well:
pytest -v -k modular
We can, of course, pass a longer substring that matches only one test name and thus single out one test case:
pytest -v -k modular_complete
When we look at the default settings for fixtures, we have a few parameters we can use to further customize our tests to our needs.
@fixture(fixture_function=None, *, scope="function", params=None, autouse=False, ids=None, name=None)
You can look into these parameters in more detail in the pytest docs.
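As a sketch of two of those parameters (the fixture and test names here are mine, not from the article): params makes pytest run each test that requests the fixture once per value, and ids labels each run in the report.

```python
import pytest

# Each test requesting `base` runs three times, once per value in
# `params`; the report shows test_square_grows[two], [three], [four].
@pytest.fixture(params=[2, 3, 4], ids=["two", "three", "four"])
def base(request):
    return request.param  # the current parameter value

def square(n):
    return n * n

def test_square_grows(base):
    # runs once per parameter value
    assert square(base) > base
```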
Additionally, we can give our fixtures explicit names via the name parameter. This is helpful when the fixture function's name would shadow something else in the same module; in that scenario, the convention is to prefix the function name with fixture_ and pass the real name via name=. Another useful parameter is autouse=True, which applies a fixture to every test automatically, without the tests having to request it:
@pytest.fixture(autouse=True)
def meaning_of_life():
    return 42
Pytest fixtures have a range of uses for different situations. In short, a fixture's scope determines when it gets destroyed during the test session. By default, each fixture has function scope; alternatively, we can choose module, class, package, or session.
Here’s a more detailed explanation of fixtures scopes from the website Better Programming.
After function, module is the next most useful scope. With module scope, the object is created the first time a test in the module requests the fixture; it's then reused by all the remaining tests and torn down once they've all completed.
The advantage of the module scope is that it uses fewer resources, since it creates the object once instead of once per test. For example, this fixture, taken from the pytest docs, helps test an SMTP connection to Gmail's servers. If we didn't use the module scope, the tests would take longer to run.
import smtplib

import pytest

@pytest.fixture(scope="module")
def smtp_connection():
    return smtplib.SMTP("smtp.gmail.com", 587, timeout=5)
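In the spirit of the pytest docs, a teardown variant of the same fixture is worth sketching: replacing return with yield means everything after the yield runs once the module's tests finish, so the connection gets closed instead of leaked.

```python
import smtplib

import pytest

@pytest.fixture(scope="module")
def smtp_connection():
    # setup: runs once, the first time a test in the module requests it
    conn = smtplib.SMTP("smtp.gmail.com", 587, timeout=5)
    yield conn   # all tests in the module share this one object
    # teardown: runs after the last test in the module finishes
    conn.close()
```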
We must remember that the purpose of software testing with pytest fixtures is to find bugs, not to prove that there are no bugs.
Original article sourced at: https://www.testim.io
This tutorial looks at how to develop and test an asynchronous API with FastAPI, Postgres, pytest and Docker using Test-driven Development (TDD). We'll also use the Databases package for interacting with Postgres asynchronously.
Dependencies:
By the end of this tutorial you should be able to:
FastAPI is a modern, high-performance, batteries-included Python web framework that's perfect for building RESTful APIs. It can handle both synchronous and asynchronous requests and has built-in support for data validation, JSON serialization, authentication and authorization, and OpenAPI (version 3.0.2 as of writing) documentation.
Highlights:
Review the Features guide from the official docs for more info. It's also encouraged to review Alternatives, Inspiration, and Comparisons, which details how FastAPI compares to other web frameworks and technologies, for context.
Start by creating a folder to hold your project called "fastapi-crud". Then, add a docker-compose.yml file and a "src" folder to the project root. Within the "src" folder, add a Dockerfile, requirements.txt file, and an "app" folder. Finally, add the following files to the "app" folder: __init__.py and main.py.
You should now have:
fastapi-crud
├── docker-compose.yml
└── src
├── Dockerfile
├── app
│ ├── __init__.py
│ └── main.py
└── requirements.txt
Unlike Django or Flask, FastAPI does not have a built-in development server. So, we'll use Uvicorn, an ASGI server, to serve up FastAPI.
New to ASGI? Read through the excellent Introduction to ASGI: Emergence of an Async Python Web Ecosystem article.
Add FastAPI and Uvicorn to the requirements file:
fastapi==0.63.0
uvicorn==0.13.4
The fact that FastAPI does not come with a development server is both a positive and a negative in my opinion. On the one hand, it does take a bit more to serve up the app in development mode. On the other, this helps to conceptually separate the web framework from the web server, which is often a source of confusion for beginners when one moves from development to production with a web framework that does have a built-in development server (like Django or Flask).
Then, within main.py, create a new instance of FastAPI and set up a sanity check route:
from fastapi import FastAPI

app = FastAPI()

@app.get("/ping")
def pong():
    return {"ping": "pong!"}
Install Docker, if you don't already have it, and then update the Dockerfile in the "src" directory:
# pull official base image
FROM python:3.9.4-alpine

# set work directory
WORKDIR /usr/src/app

# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# copy requirements file
COPY ./requirements.txt /usr/src/app/requirements.txt

# install dependencies
RUN set -eux \
    && apk add --no-cache --virtual .build-deps build-base \
        libressl-dev libffi-dev gcc musl-dev python3-dev \
    && pip install --upgrade pip setuptools wheel \
    && pip install -r /usr/src/app/requirements.txt \
    && rm -rf /root/.cache/pip

# copy project
COPY . /usr/src/app/
So, we started with an Alpine-based Docker image for Python 3.9.4. We then set a working directory along with two environment variables:
- PYTHONDONTWRITEBYTECODE: prevents Python from writing pyc files to disc (equivalent to the python -B option)
- PYTHONUNBUFFERED: prevents Python from buffering stdout and stderr (equivalent to the python -u option)

Finally, we copied over the requirements.txt file, installed some system-level dependencies, updated Pip, installed the requirements, and copied over the FastAPI app itself.
Review Docker for Python Developers for more on structuring Dockerfiles as well as some best practices for configuring Docker for Python-based development.
Next, add the following to the docker-compose.yml file in the project root:
version: '3.8'

services:
  web:
    build: ./src
    command: uvicorn app.main:app --reload --workers 1 --host 0.0.0.0 --port 8000
    volumes:
      - ./src/:/usr/src/app/
    ports:
      - 8002:8000
So, when the container spins up, Uvicorn will run with the following settings:
- --reload enables auto-reload, so the server will restart after changes are made to the code base.
- --workers 1 provides a single worker process.
- --host 0.0.0.0 defines the address to host the server on.
- --port 8000 defines the port to host the server on.
- app.main:app tells Uvicorn where it can find the FastAPI ASGI application -- e.g., "within the 'app' module, you'll find the ASGI app, app = FastAPI(), in the 'main.py' file."
For more on the Docker Compose file config, review the Compose file reference.
Build the image and spin up the container:
$ docker-compose up -d --build
Navigate to http://localhost:8002/ping. You should see:
{
"ping": "pong!"
}
You'll also be able to view the interactive API documentation, powered by Swagger UI, at http://localhost:8002/docs:
Create a "tests" folder in "src" and then add an __init__.py file to "tests" along with a test_main.py file:
from starlette.testclient import TestClient

from app.main import app

client = TestClient(app)

def test_ping():
    response = client.get("/ping")
    assert response.status_code == 200
    assert response.json() == {"ping": "pong!"}
Here, we imported Starlette's TestClient, which uses the Requests library to make requests against the FastAPI app.
Add pytest and Requests to requirements.txt:
fastapi==0.63.0
uvicorn==0.13.4
# dev
pytest==6.2.3
requests==2.25.1
Update the image and then run the tests:
$ docker-compose up -d --build
$ docker-compose exec web pytest .
You should see:
================================ test session starts ================================
platform linux -- Python 3.9.4, pytest-6.2.3, py-1.10.0, pluggy-0.13.1
rootdir: /usr/src/app
collected 1 item
tests/test_main.py . [100%]
================================= 1 passed in 0.15s =================================
Before moving on, add a test_app
pytest fixture to a new file called src/tests/conftest.py:
import pytest
from starlette.testclient import TestClient

from app.main import app

@pytest.fixture(scope="module")
def test_app():
    client = TestClient(app)
    yield client  # testing happens here
Update the test file as well so that it uses the fixture:
def test_ping(test_app):
    response = test_app.get("/ping")
    assert response.status_code == 200
    assert response.json() == {"ping": "pong!"}
Your project structure should now look like this:
fastapi-crud
├── docker-compose.yml
└── src
├── Dockerfile
├── app
│ ├── __init__.py
│ └── main.py
├── requirements.txt
└── tests
├── __init__.py
├── conftest.py
└── test_main.py
Let's convert the synchronous handler over to an asynchronous one.
Rather than having to go through the trouble of spinning up a task queue (like Celery or RQ) or utilizing threads, FastAPI makes it easy to deliver routes asynchronously. As long as you don't have any blocking I/O calls in the handler, you can simply declare the handler as asynchronous by adding the async
keyword like so:
@app.get("/ping")
async def pong():
    # some async operation could happen here
    # example: `notes = await get_all_notes()`
    return {"ping": "pong!"}
That's it. Update the handler in your code, and then make sure the tests still pass:
================================ test session starts ================================
platform linux -- Python 3.9.4, pytest-6.2.3, py-1.10.0, pluggy-0.13.1
rootdir: /usr/src/app
collected 1 item
tests/test_main.py . [100%]
================================= 1 passed in 0.14s =================================
Review the Concurrency and async / await guide for a technical deep dive into async.
Next, let's set up the basic CRUD routes, following RESTful best practices:
| Endpoint | HTTP Method | CRUD Method | Result |
|---|---|---|---|
| /notes/ | GET | READ | get all notes |
| /notes/:id/ | GET | READ | get a single note |
| /notes/ | POST | CREATE | add a note |
| /notes/:id/ | PUT | UPDATE | update a note |
| /notes/:id/ | DELETE | DELETE | delete a note |
For each route, we'll follow the usual TDD cycle:

- write a test
- run the test, to ensure it fails (red)
- write just enough code to get the test to pass (green)
- refactor (if necessary)
Before diving in, let's add some structure to better organize the CRUD routes with FastAPI's APIRouter.
You can break up and modularize larger projects as well as apply versioning to your API with the APIRouter. If you're familiar with Flask, it is equivalent to a Blueprint.
First, add a new folder called "api" to the "app" folder. Add an __init__.py file to the newly created folder.
Now we can move the /ping
route to a new file called src/app/api/ping.py:
from fastapi import APIRouter

router = APIRouter()

@router.get("/ping")
async def pong():
    # some async operation could happen here
    # example: `notes = await get_all_notes()`
    return {"ping": "pong!"}
Then, update main.py like so to remove the old route and wire the router up to our main app:
from fastapi import FastAPI
from app.api import ping
app = FastAPI()
app.include_router(ping.router)
Rename test_main.py to test_ping.py.
Make sure http://localhost:8002/ping and http://localhost:8002/docs still work. Also, be sure the tests still pass before moving on.
fastapi-crud
├── docker-compose.yml
└── src
├── Dockerfile
├── app
│ ├── __init__.py
│ ├── api
│ │ ├── __init__.py
│ │ └── ping.py
│ └── main.py
├── requirements.txt
└── tests
├── __init__.py
├── conftest.py
└── test_ping.py
To configure Postgres, we'll need to add a new service to the docker-compose.yml file, add the appropriate environment variables, and install asyncpg.
First, add a new service called db
to docker-compose.yml:
version: '3.8'

services:
  web:
    build: ./src
    command: |
      bash -c 'while !</dev/tcp/db/5432; do sleep 1; done; uvicorn app.main:app --reload --workers 1 --host 0.0.0.0 --port 8000'
    volumes:
      - ./src/:/usr/src/app/
    ports:
      - 8002:8000
    environment:
      - DATABASE_URL=postgresql://hello_fastapi:hello_fastapi@db/hello_fastapi_dev
  db:
    image: postgres:13-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    expose:
      - 5432
    environment:
      - POSTGRES_USER=hello_fastapi
      - POSTGRES_PASSWORD=hello_fastapi
      - POSTGRES_DB=hello_fastapi_dev

volumes:
  postgres_data:
To persist the data beyond the life of the container we configured a volume. This config will bind postgres_data
to the "/var/lib/postgresql/data/" directory in the container.
We also added an environment key to define a name for the default database and set a username and password.
Review the "Environment Variables" section of the Postgres Docker Hub page for more info.
Update the Dockerfile to install the appropriate packages required for asyncpg:
# pull official base image
FROM python:3.9.4-alpine

# set work directory
WORKDIR /usr/src/app

# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# copy requirements file
COPY ./requirements.txt /usr/src/app/requirements.txt

# install dependencies
RUN set -eux \
    && apk add --no-cache --virtual .build-deps build-base \
        libressl-dev libffi-dev gcc musl-dev python3-dev \
        postgresql-dev bash \
    && pip install --upgrade pip setuptools wheel \
    && pip install -r /usr/src/app/requirements.txt \
    && rm -rf /root/.cache/pip

# copy project
COPY . /usr/src/app/
Add asyncpg to src/requirements.txt:
asyncpg==0.22.0
fastapi==0.63.0
uvicorn==0.13.4
# dev
pytest==6.2.3
requests==2.25.1
Next, add a db.py file to "src/app":
import os
from databases import Database
from sqlalchemy import create_engine, MetaData
DATABASE_URL = os.getenv("DATABASE_URL")
# SQLAlchemy
engine = create_engine(DATABASE_URL)
metadata = MetaData()
# databases query builder
database = Database(DATABASE_URL)
Here, using the database URI and credentials that we just configured in the Docker Compose file, we created a SQLAlchemy engine (used for communicating with the database) along with a Metadata instance (used for creating the database schema). We also created a new Database instance from databases.
databases is an async SQL query builder that works on top of the SQLAlchemy Core expression language. It supports the following methods:
database.fetch_all(query)
database.fetch_one(query)
database.iterate(query)
database.execute(query)
database.execute_many(query)
Review the Async SQL (Relational) Databases guide and the Starlette Database docs for more details on working with databases asynchronously.
Update the requirements:
asyncpg==0.22.0
databases[postgresql]==0.4.3
fastapi==0.63.0
psycopg2-binary==2.8.6
SQLAlchemy==1.3.24
uvicorn==0.13.4
# dev
pytest==6.2.3
requests==2.25.1
We're installing Psycopg since we will be using create_all, which is a synchronous SQLAlchemy function.
Add a notes
model to src/app/db.py:
import os

from sqlalchemy import (
    Column,
    DateTime,
    Integer,
    MetaData,
    String,
    Table,
    create_engine
)
from sqlalchemy.sql import func

from databases import Database

DATABASE_URL = os.getenv("DATABASE_URL")

# SQLAlchemy
engine = create_engine(DATABASE_URL)
metadata = MetaData()

notes = Table(
    "notes",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("title", String(50)),
    Column("description", String(50)),
    Column("created_date", DateTime, default=func.now(), nullable=False),
)

# databases query builder
database = Database(DATABASE_URL)
Wire up the database and the model in main.py and add startup and shutdown event handlers for connecting to and disconnecting from the database:
from fastapi import FastAPI

from app.api import ping
from app.db import engine, database, metadata

metadata.create_all(engine)

app = FastAPI()

@app.on_event("startup")
async def startup():
    await database.connect()

@app.on_event("shutdown")
async def shutdown():
    await database.disconnect()

app.include_router(ping.router)
Build the new image and spin up the two containers:
$ docker-compose up -d --build
Ensure the notes
table was created:
$ docker-compose exec db psql --username=hello_fastapi --dbname=hello_fastapi_dev
psql (13.2)
Type "help" for help.
hello_fastapi_dev=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-------------------+---------------+----------+------------+------------+---------------------------------
hello_fastapi_dev | hello_fastapi | UTF8 | en_US.utf8 | en_US.utf8 |
postgres | hello_fastapi | UTF8 | en_US.utf8 | en_US.utf8 |
template0 | hello_fastapi | UTF8 | en_US.utf8 | en_US.utf8 | =c/hello_fastapi +
| | | | | hello_fastapi=CTc/hello_fastapi
template1 | hello_fastapi | UTF8 | en_US.utf8 | en_US.utf8 | =c/hello_fastapi +
| | | | | hello_fastapi=CTc/hello_fastapi
(4 rows)
hello_fastapi_dev=# \c hello_fastapi_dev
You are now connected to database "hello_fastapi_dev" as user "hello_fastapi".
hello_fastapi_dev=# \dt
List of relations
Schema | Name | Type | Owner
--------+-------+-------+---------------
public | notes | table | hello_fastapi
(1 row)
hello_fastapi_dev=# \q
First time using Pydantic? Review the Overview guide from the official docs.
Create a NoteSchema
Pydantic model with two required fields, title
and description
, in a new file called models.py in "src/app/api":
from pydantic import BaseModel

class NoteSchema(BaseModel):
    title: str
    description: str
NoteSchema
will be used for validating the payloads for creating and updating notes.
Let's break from the normal TDD flow for this first route in order to establish the coding pattern that we'll use for the remaining routes.
Create a new file called notes.py in the "src/app/api" folder:
from fastapi import APIRouter, HTTPException

from app.api import crud
from app.api.models import NoteDB, NoteSchema

router = APIRouter()

@router.post("/", response_model=NoteDB, status_code=201)
async def create_note(payload: NoteSchema):
    note_id = await crud.post(payload)

    response_object = {
        "id": note_id,
        "title": payload.title,
        "description": payload.description,
    }
    return response_object
Here, we defined a handler that expects a payload, payload: NoteSchema
, with a title and a description.
Essentially, when the route is hit with a POST request, FastAPI will read the body of the request and validate the data:

- If valid, the data will be available in the payload parameter. FastAPI also generates JSON Schema definitions that are then used to automatically generate the OpenAPI schema and the API documentation.
- If invalid, an error is returned immediately.

Review the Request Body docs for more info.
It's worth noting that we used the async
declaration here since the database communication will be asynchronous. In other words, there are no blocking I/O operations in the handler.
Next, create a new file called crud.py in the "src/app/api" folder:
from app.api.models import NoteSchema
from app.db import notes, database

async def post(payload: NoteSchema):
    query = notes.insert().values(title=payload.title, description=payload.description)
    return await database.execute(query=query)
We added a utility function called post for creating new notes that takes a payload object and then:

- creates a SQLAlchemy insert query
- executes the query and returns the generated ID
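To make the division of labor concrete, here's a standalone sketch (the table definition is repeated from db.py, with illustrative values) showing that SQLAlchemy Core only builds the SQL statement, while databases executes it:

```python
import sqlalchemy as sa

metadata = sa.MetaData()
notes = sa.Table(
    "notes",
    metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("title", sa.String(50)),
    sa.Column("description", sa.String(50)),
)

# Build (but don't execute) the same query that crud.post() creates.
query = notes.insert().values(title="foo", description="bar")
print(str(query))  # INSERT INTO notes (title, description) VALUES (:title, :description)
```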
Next, we need to define a new Pydantic model for use as the response_model:
@router.post("/", response_model=NoteDB, status_code=201)
Update models.py like so:
from pydantic import BaseModel

class NoteSchema(BaseModel):
    title: str
    description: str

class NoteDB(NoteSchema):
    id: int
The NoteDB
model inherits from the NoteSchema
model, adding an id
field.
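As a quick illustrative sketch (values are made up), the inheritance means a NoteDB instance validates everything NoteSchema does, plus the extra id:

```python
from pydantic import BaseModel

class NoteSchema(BaseModel):
    title: str
    description: str

class NoteDB(NoteSchema):
    id: int  # inherited fields plus the database-assigned ID

note = NoteDB(id=1, title="foo", description="bar")
print(note.dict())  # {'title': 'foo', 'description': 'bar', 'id': 1}
```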
Wire up the new router in main.py:
from fastapi import FastAPI

from app.api import notes, ping
from app.db import database, engine, metadata

metadata.create_all(engine)

app = FastAPI()

@app.on_event("startup")
async def startup():
    await database.connect()

@app.on_event("shutdown")
async def shutdown():
    await database.disconnect()

app.include_router(ping.router)
app.include_router(notes.router, prefix="/notes", tags=["notes"])
Take note of the prefix URL along with the "notes"
tag, which will be applied to the OpenAPI schema (for grouping operations).
Test it out with curl or HTTPie:
$ http --json POST http://localhost:8002/notes/ title=foo description=bar
You should see:
HTTP/1.1 201 Created
content-length: 42
content-type: application/json
date: Tue, 20 Apr 2021 23:37:01 GMT
server: uvicorn
{
"description": "bar",
"id": 1,
"title": "foo"
}
You can also interact with the endpoint at http://localhost:8002/docs.
Add the following test to a new test file called src/tests/test_notes.py:
import json

import pytest

from app.api import crud

def test_create_note(test_app, monkeypatch):
    test_request_payload = {"title": "something", "description": "something else"}
    test_response_payload = {"id": 1, "title": "something", "description": "something else"}

    async def mock_post(payload):
        return 1

    monkeypatch.setattr(crud, "post", mock_post)

    response = test_app.post("/notes/", data=json.dumps(test_request_payload))

    assert response.status_code == 201
    assert response.json() == test_response_payload

def test_create_note_invalid_json(test_app):
    response = test_app.post("/notes/", data=json.dumps({"title": "something"}))
    assert response.status_code == 422
This test uses the pytest monkeypatch fixture to mock out the crud.post
function. We then asserted that the endpoint responds with the expected status codes and response body.
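For intuition about what the monkeypatch fixture does, here's a standalone sketch using pytest's MonkeyPatch class directly (importable from pytest since 6.2); the fixture performs the undo step for you automatically at test teardown:

```python
import math

from pytest import MonkeyPatch

mp = MonkeyPatch()
original_pi = math.pi

mp.setattr(math, "pi", 3)   # patch: math.pi is now 3 for the "test"
assert math.pi == 3

mp.undo()                   # restore; the fixture does this for you
assert math.pi == original_pi
```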
$ docker-compose exec web pytest .
================================ test session starts ================================
platform linux -- Python 3.9.4, pytest-6.2.3, py-1.10.0, pluggy-0.13.1
rootdir: /usr/src/app
collected 3 items
tests/test_notes.py .. [ 66%]
tests/test_ping.py . [100%]
================================= 3 passed in 0.26s =================================
With that, we can configure the remaining CRUD routes using Test-driven Development.
fastapi-crud
├── docker-compose.yml
└── src
├── Dockerfile
├── app
│ ├── __init__.py
│ ├── api
│ │ ├── __init__.py
│ │ ├── crud.py
│ │ ├── models.py
│ │ ├── notes.py
│ │ └── ping.py
│ ├── db.py
│ └── main.py
├── requirements.txt
└── tests
├── __init__.py
├── conftest.py
├── test_notes.py
└── test_ping.py
Add the following tests:
def test_read_note(test_app, monkeypatch):
    test_data = {"id": 1, "title": "something", "description": "something else"}

    async def mock_get(id):
        return test_data

    monkeypatch.setattr(crud, "get", mock_get)

    response = test_app.get("/notes/1")
    assert response.status_code == 200
    assert response.json() == test_data

def test_read_note_incorrect_id(test_app, monkeypatch):
    async def mock_get(id):
        return None

    monkeypatch.setattr(crud, "get", mock_get)

    response = test_app.get("/notes/999")
    assert response.status_code == 404
    assert response.json()["detail"] == "Note not found"
They should fail:
================================ test session starts ================================
platform linux -- Python 3.9.4, pytest-6.2.3, py-1.10.0, pluggy-0.13.1
rootdir: /usr/src/app
collected 5 items
tests/test_notes.py ..FF [ 80%]
tests/test_ping.py . [100%]
===================================== FAILURES ======================================
__________________________________ test_read_note ___________________________________
test_app = <starlette.testclient.TestClient object at 0x7f60d5ca1910>
monkeypatch = <_pytest.monkeypatch.MonkeyPatch object at 0x7f60d5ca19a0>
def test_read_note(test_app, monkeypatch):
test_data = {"id": 1, "title": "something", "description": "something else"}
async def mock_get(id):
return test_data
> monkeypatch.setattr(crud, "get", mock_get)
E AttributeError: <module 'app.api.crud' from '/usr/src/app/app/api/crud.py'>
has no attribute 'get'
tests/test_notes.py:34: AttributeError
____________________________ test_read_note_incorrect_id ____________________________
test_app = <starlette.testclient.TestClient object at 0x7f60d5ca1910>
monkeypatch = <_pytest.monkeypatch.MonkeyPatch object at 0x7f60d5c9c7c0>
def test_read_note_incorrect_id(test_app, monkeypatch):
async def mock_get(id):
return None
> monkeypatch.setattr(crud, "get", mock_get)
E AttributeError: <module 'app.api.crud' from '/usr/src/app/app/api/crud.py'>
has no attribute 'get'
tests/test_notes.py:45: AttributeError
============================ 2 failed, 3 passed in 0.36s ============================
Add the handler:
@router.get("/{id}/", response_model=NoteDB)
async def read_note(id: int):
    note = await crud.get(id)
    if not note:
        raise HTTPException(status_code=404, detail="Note not found")
    return note
Here, instead of taking a payload, the handler requires an id
, an integer, which will come from the path -- i.e., /notes/5/.
Add the get
utility function to crud.py:
async def get(id: int):
    query = notes.select().where(id == notes.c.id)
    return await database.fetch_one(query=query)
Before moving on, ensure the tests pass and manually test the new endpoint in the browser, with curl or HTTPie, and/or via the API documentation.
Next, add a test for reading all notes:
def test_read_all_notes(test_app, monkeypatch):
    test_data = [
        {"title": "something", "description": "something else", "id": 1},
        {"title": "someone", "description": "someone else", "id": 2},
    ]

    async def mock_get_all():
        return test_data

    monkeypatch.setattr(crud, "get_all", mock_get_all)

    response = test_app.get("/notes/")
    assert response.status_code == 200
    assert response.json() == test_data
Again, make sure the test fails.
@router.get("/", response_model=List[NoteDB])
async def read_all_notes():
    return await crud.get_all()
Import List from Python's typing module:
from typing import List
The response_model
is a List
with a NoteDB
subtype.
Add the CRUD util:
async def get_all():
    query = notes.select()
    return await database.fetch_all(query=query)
Make sure the automated tests pass. Manually test this endpoint as well.
def test_update_note(test_app, monkeypatch):
    test_update_data = {"title": "someone", "description": "someone else", "id": 1}

    async def mock_get(id):
        return True

    monkeypatch.setattr(crud, "get", mock_get)

    async def mock_put(id, payload):
        return 1

    monkeypatch.setattr(crud, "put", mock_put)

    response = test_app.put("/notes/1/", data=json.dumps(test_update_data))
    assert response.status_code == 200
    assert response.json() == test_update_data

@pytest.mark.parametrize(
    "id, payload, status_code",
    [
        [1, {}, 422],
        [1, {"description": "bar"}, 422],
        [999, {"title": "foo", "description": "bar"}, 404],
    ],
)
def test_update_note_invalid(test_app, monkeypatch, id, payload, status_code):
    async def mock_get(id):
        return None

    monkeypatch.setattr(crud, "get", mock_get)

    response = test_app.put(f"/notes/{id}/", data=json.dumps(payload))
    assert response.status_code == status_code
This test uses the pytest parametrize decorator to parametrize the arguments for the test_update_note_invalid
function.
Handler:
@router.put("/{id}/", response_model=NoteDB)
async def update_note(id: int, payload: NoteSchema):
    note = await crud.get(id)
    if not note:
        raise HTTPException(status_code=404, detail="Note not found")

    note_id = await crud.put(id, payload)

    response_object = {
        "id": note_id,
        "title": payload.title,
        "description": payload.description,
    }
    return response_object
Util:
async def put(id: int, payload: NoteSchema):
    query = (
        notes
        .update()
        .where(id == notes.c.id)
        .values(title=payload.title, description=payload.description)
        .returning(notes.c.id)
    )
    return await database.execute(query=query)
def test_remove_note(test_app, monkeypatch):
    test_data = {"title": "something", "description": "something else", "id": 1}

    async def mock_get(id):
        return test_data

    monkeypatch.setattr(crud, "get", mock_get)

    async def mock_delete(id):
        return id

    monkeypatch.setattr(crud, "delete", mock_delete)

    response = test_app.delete("/notes/1/")
    assert response.status_code == 200
    assert response.json() == test_data

def test_remove_note_incorrect_id(test_app, monkeypatch):
    async def mock_get(id):
        return None

    monkeypatch.setattr(crud, "get", mock_get)

    response = test_app.delete("/notes/999/")
    assert response.status_code == 404
    assert response.json()["detail"] == "Note not found"
Handler:
@router.delete("/{id}/", response_model=NoteDB)
async def delete_note(id: int):
    note = await crud.get(id)
    if not note:
        raise HTTPException(status_code=404, detail="Note not found")

    await crud.delete(id)

    return note
Util:
async def delete(id: int):
    query = notes.delete().where(id == notes.c.id)
    return await database.execute(query=query)
Make sure all tests pass:
================================ test session starts ================================
platform linux -- Python 3.9.4, pytest-6.2.3, py-1.10.0, pluggy-0.13.1
rootdir: /usr/src/app
collected 12 items
tests/test_notes.py ........... [ 91%]
tests/test_ping.py . [100%]
================================ 12 passed in 0.56s =================================
Let's add some additional validation to the routes, checking that:
- the id is greater than 0 for reading a single note, updating a note, and deleting a note
- the title and description fields from the request payloads have lengths >= 3 and <= 50 for adding and updating a note

Update the test_read_note_incorrect_id test:
def test_read_note_incorrect_id(test_app, monkeypatch):
    async def mock_get(id):
        return None

    monkeypatch.setattr(crud, "get", mock_get)

    response = test_app.get("/notes/999")
    assert response.status_code == 404
    assert response.json()["detail"] == "Note not found"

    response = test_app.get("/notes/0")
    assert response.status_code == 422
The test should fail:
> assert response.status_code == 422
E assert 404 == 422
E + where 404 = <Response [404]>.status_code
Update the handler:
@router.get("/{id}/", response_model=NoteDB)
async def read_note(id: int = Path(..., gt=0)):
    note = await crud.get(id)
    if not note:
        raise HTTPException(status_code=404, detail="Note not found")
    return note
Make sure to import Path:
from fastapi import APIRouter, HTTPException, Path
So, we added the following metadata to the parameter with Path:
- ... : the value is required (Ellipsis)
- gt : the value must be greater than 0

The tests should pass. Try out the API documentation as well:
Update the test_create_note_invalid_json
test:
def test_create_note_invalid_json(test_app):
    response = test_app.post("/notes/", data=json.dumps({"title": "something"}))
    assert response.status_code == 422

    response = test_app.post("/notes/", data=json.dumps({"title": "1", "description": "2"}))
    assert response.status_code == 422
To get the test to pass, update the NoteSchema
model like so:
class NoteSchema(BaseModel):
    title: str = Field(..., min_length=3, max_length=50)
    description: str = Field(..., min_length=3, max_length=50)
Here, we added additional validation to the Pydantic model with Field. It works just like Path
.
Add the import:
from pydantic import BaseModel, Field
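As a standalone sketch (with illustrative values), the new constraints reject a too-short title during model validation, before a request would ever reach the handler:

```python
from pydantic import BaseModel, Field, ValidationError

class NoteSchema(BaseModel):
    title: str = Field(..., min_length=3, max_length=50)
    description: str = Field(..., min_length=3, max_length=50)

NoteSchema(title="foo", description="bar")  # valid: both fields long enough

try:
    NoteSchema(title="1", description="bar")  # title shorter than 3 chars
except ValidationError as exc:
    # the error report names the offending field
    print("rejected:", exc.errors()[0]["loc"])
```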
Add three more scenarios to test_update_note_invalid
:
@pytest.mark.parametrize(
    "id, payload, status_code",
    [
        [1, {}, 422],
        [1, {"description": "bar"}, 422],
        [999, {"title": "foo", "description": "bar"}, 404],
        [1, {"title": "1", "description": "bar"}, 422],
        [1, {"title": "foo", "description": "1"}, 422],
        [0, {"title": "foo", "description": "bar"}, 422],
    ],
)
def test_update_note_invalid(test_app, monkeypatch, id, payload, status_code):
    async def mock_get(id):
        return None

    monkeypatch.setattr(crud, "get", mock_get)

    response = test_app.put(f"/notes/{id}/", data=json.dumps(payload))
    assert response.status_code == status_code
Handler:
@router.put("/{id}/", response_model=NoteDB)
async def update_note(payload: NoteSchema, id: int = Path(..., gt=0)):
    note = await crud.get(id)
    if not note:
        raise HTTPException(status_code=404, detail="Note not found")

    note_id = await crud.put(id, payload)

    response_object = {
        "id": note_id,
        "title": payload.title,
        "description": payload.description,
    }
    return response_object
Test:
def test_remove_note_incorrect_id(test_app, monkeypatch):
    async def mock_get(id):
        return None

    monkeypatch.setattr(crud, "get", mock_get)

    response = test_app.delete("/notes/999/")
    assert response.status_code == 404
    assert response.json()["detail"] == "Note not found"

    response = test_app.delete("/notes/0/")
    assert response.status_code == 422
Handler:
@router.delete("/{id}/", response_model=NoteDB)
async def delete_note(id: int = Path(..., gt=0)):
    note = await crud.get(id)
    if not note:
        raise HTTPException(status_code=404, detail="Note not found")

    await crud.delete(id)

    return note
The tests should pass:
================================ test session starts ================================
platform linux -- Python 3.9.4, pytest-6.2.3, py-1.10.0, pluggy-0.13.1
rootdir: /usr/src/app
collected 15 items
tests/test_notes.py .............. [ 93%]
tests/test_ping.py . [100%]
================================ 15 passed in 0.45s =================================
We built a synchronous flavor of this API so you can compare the two models. You can grab the code from the fastapi-crud-sync repo. Try conducting some performance tests against both versions on your own with ApacheBench.
In this tutorial, we covered how to develop and test an asynchronous API with FastAPI, Postgres, pytest, and Docker using Test-driven Development.
With Flask-like simplicity, Django-like batteries, and Go/Node-like performance, FastAPI is a powerful framework that makes it easy and fun to spin up RESTful APIs. Check your understanding by reviewing the objectives from the beginning of this tutorial and going through each of the challenges below.
Looking for some more challenges?
You can find the source code in the fastapi-crud-async repo. Thanks for reading!
Original article source at: https://testdriven.io/
This article serves as a guide to testing Flask applications with pytest.
We'll first look at why testing is important for creating maintainable software and what you should focus on when testing. Then, we'll detail how to:
The source code (along with detailed installation instructions) for the Flask app being tested in this article can be found on GitLab at https://gitlab.com/patkennedy79/flask_user_management_example.
By the end of this article, you will be able to:
In general, testing helps ensure that your app will work as expected for your end users.
Software projects with high test coverage are never perfect, but it's a good initial indicator of the quality of the software. Additionally, testable code is generally a sign of a good software architecture, which is why advanced developers take testing into account throughout the entire development lifecycle.
Tests can be considered at three levels:
Unit tests test the functionality of an individual unit of code isolated from its dependencies. They are the first line of defense against errors and inconsistencies in your codebase. They test from the inside out, from the programmer's point of view.
Functional tests test multiple components of a software product to make sure the components are working together properly. Typically, these tests focus on functionality that the user will be utilizing. They test from the outside in, from the end user's point of view.
Both unit and functional testing are fundamental parts of the Test-Driven Development (TDD) process.
Testing improves the maintainability of your code.
Maintainability refers to how easily you can make bug fixes or enhancements to your code, and how easily another developer can update it at some point in the future.
Testing should be combined with a Continuous Integration (CI) process to ensure that your tests are constantly being executed, ideally on each commit to your repository. A solid suite of tests can be critical to catching defects quickly and early in the development process before your end users come across them in production.
What should you test?
Again, unit tests should focus on testing small units of code in isolation.
For example, in a Flask app, you may use unit tests to test:
Functional tests, meanwhile, should focus on how the view functions operate.
For example:
Focus on testing scenarios that the end user will interact with. The experience that the users of your product have is paramount!
pytest is a test framework for Python used to write, organize, and run test cases. After setting up your basic test structure, pytest makes it really easy to write tests and provides a lot of flexibility for running the tests. pytest satisfies the key aspects of a good test environment:
pytest is incredible! I highly recommend using it for testing any application or script written in Python.
If you're interested in really learning all the different aspects of pytest, I highly recommend the Python Testing with pytest book by Brian Okken.
Python has a built-in test framework called unittest, which is a great choice for testing as well. The unittest module is inspired by the xUnit test framework.
It provides the following:
- `assert` statements for performing checks

The main differences between pytest and unittest:
Feature | pytest | unittest |
---|---|---|
Installation | Third-party library | Part of the core standard library |
Test setup and teardown | fixtures | setUp() and tearDown() methods |
Assertion Format | Built-in assert | assert* style methods |
Structure | Functional | Object-oriented |
Either framework is good for testing a Flask project. However, I prefer pytest since it:
- uses the plain `assert` statement, which is far more readable and easier to remember compared to the `assertSomething` methods -- like `assertEquals`, `assertTrue`, and `assertContains` -- in unittest

I like to organize all the test cases in a separate "tests" folder at the same level as the application files.
Additionally, I really like differentiating between unit and functional tests by splitting them out as separate sub-folders. This structure gives you the flexibility to easily run just the unit tests (or just the functional tests, for that matter).
Here's an example of the structure of the "tests" directory:
└── tests
├── conftest.py
├── functional
│ ├── __init__.py
│ ├── test_stocks.py
│ └── test_users.py
└── unit
├── __init__.py
└── test_models.py
And, here's how the "tests" folder fits into a typical Flask project with blueprints:
├── app.py
├── project
│ ├── __init__.py
│ ├── models.py
│ └── ...blueprint folders...
├── requirements.txt
├── tests
│ ├── conftest.py
│ ├── functional
│ │ ├── __init__.py
│ │ ├── test_stocks.py
│ │ └── test_users.py
│ └── unit
│ ├── __init__.py
│ └── test_models.py
└── venv
The first test that we're going to write is a unit test for project/models.py, which contains the SQLAlchemy interface to the database.
This test doesn't access the underlying database; it only checks the interface class used by SQLAlchemy.
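The test below assumes a `User` constructor that hashes the password and assigns a default role. As a rough sketch of the class under test (the real project/models.py uses SQLAlchemy and a proper password-hashing library; the `sha256` call and the class body here are illustrative only):

```python
import hashlib

class User:
    """Simplified stand-in for the project's SQLAlchemy User model."""

    def __init__(self, email: str, plaintext_password: str):
        self.email = email
        # Store a hash, never the plaintext password
        # (illustration only -- a real app should use a salted KDF)
        self.hashed_password = hashlib.sha256(plaintext_password.encode()).hexdigest()
        self.role = 'user'  # default role for new users

user = User('patkennedy79@gmail.com', 'FlaskIsAwesome')
assert user.hashed_password != 'FlaskIsAwesome'
```

This is why the unit test asserts `user.hashed_password != 'FlaskIsAwesome'`: the constructor should never store the plaintext password.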
Since this test is a unit test, it should be implemented in tests/unit/test_models.py:
from project.models import User


def test_new_user():
    """
    GIVEN a User model
    WHEN a new User is created
    THEN check the email, hashed_password, and role fields are defined correctly
    """
    user = User('patkennedy79@gmail.com', 'FlaskIsAwesome')
    assert user.email == 'patkennedy79@gmail.com'
    assert user.hashed_password != 'FlaskIsAwesome'
    assert user.role == 'user'
Let's take a closer look at this test.
After the import, we start with a description of what the test does:
"""
GIVEN a User model
WHEN a new User is created
THEN check the email, hashed_password, and role fields are defined correctly
"""
Why include so many comments for a test function?
I've found that tests are one of the most difficult aspects of a project to maintain. Often, the code (including the level of comments) for test suites is nowhere near the level of quality as the code being tested.
A common structure used to describe what each test function does helps with maintainability by making it easier for someone (another developer, your future self) to quickly understand the purpose of each test.
A common practice is to use the GIVEN-WHEN-THEN structure:
- GIVEN - what are the initial conditions for the test?
- WHEN - what is occurring that needs to be tested?
- THEN - what is the expected response?
For more, review the GivenWhenThen article by Martin Fowler and the Python Testing with pytest book by Brian Okken.
Next, we have the actual test:
user = User('patkennedy79@gmail.com', 'FlaskIsAwesome')
assert user.email == 'patkennedy79@gmail.com'
assert user.hashed_password != 'FlaskIsAwesome'
assert user.role == 'user'
After creating a new `user` with valid arguments to the constructor, the properties of the `user` are checked to make sure it was created properly.
The second test that we're going to write is a functional test for project/recipes/routes.py, which contains the view functions for the `recipes` blueprint.
Since this test is a functional test, it should be implemented in tests/functional/test_recipes.py:
from project import create_app
def test_home_page():
    """
    GIVEN a Flask application configured for testing
    WHEN the '/' page is requested (GET)
    THEN check that the response is valid
    """
    flask_app = create_app('flask_test.cfg')

    # Create a test client using the Flask application configured for testing
    with flask_app.test_client() as test_client:
        response = test_client.get('/')
        assert response.status_code == 200
        assert b"Welcome to the" in response.data
        assert b"Flask User Management Example!" in response.data
        assert b"Need an account?" in response.data
        assert b"Existing user?" in response.data
This project uses the Application Factory Pattern to create the Flask application. Therefore, the `create_app()` function needs to first be imported:
from project import create_app
The test function, `test_home_page()`, starts with the GIVEN-WHEN-THEN description of what the test does. Next, a Flask application (`flask_app`) is created:
flask_app = create_app('flask_test.cfg')
In order to create the proper environment for testing, Flask provides a test_client helper. This creates a test version of our Flask application, which we used to make a GET call to the '/' URL. We then check that the status code returned is OK (200) and that the response contained the following strings:
These checks match with what we expect the user to see when we navigate to the '/' URL:
An example of an off-nominal functional test would be to utilize an invalid HTTP method (POST) when accessing the '/' URL:
def test_home_page_post():
    """
    GIVEN a Flask application configured for testing
    WHEN the '/' page is posted to (POST)
    THEN check that a '405' status code is returned
    """
    flask_app = create_app('flask_test.cfg')

    # Create a test client using the Flask application configured for testing
    with flask_app.test_client() as test_client:
        response = test_client.post('/')
        assert response.status_code == 405
        assert b"Flask User Management Example!" not in response.data
This test checks that a POST request to the '/' URL results in an error code of 405 (Method Not Allowed) being returned.
Take a second to review the two functional tests... do you see some duplicate code between these two test functions? Do you see a lot of code for initializing the state needed by the test functions? We can use fixtures to address these issues.
Fixtures initialize tests to a known state in order to run tests in a predictable and repeatable manner.
The classic approach to writing and executing tests follows the xUnit type of test framework, where each test runs as follows:

1. `SetUp()`
2. The test case runs
3. `TearDown()`

The `SetUp()` and `TearDown()` methods always run for each unit test within a test suite. This approach results in the same initial state for each test within a test suite, which doesn't provide much flexibility.
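For reference, here's a minimal, self-contained illustration of that classic style using Python's built-in unittest (the class and test names are illustrative, not from the project):

```python
import unittest

class TestNumbers(unittest.TestCase):
    def setUp(self):
        # Runs before EVERY test: each test starts from the same state
        self.numbers = [1, 2, 3]

    def tearDown(self):
        # Runs after EVERY test
        self.numbers = None

    def test_sum(self):
        self.assertEqual(sum(self.numbers), 6)

    def test_length(self):
        self.assertEqual(len(self.numbers), 3)

# Run the suite programmatically so the example is self-checking
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestNumbers)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because `setUp()` and `tearDown()` apply uniformly to every test in the class, you can't easily give different tests different starting states -- which is exactly the limitation fixtures address.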
The test fixture approach provides much greater flexibility than the classic Setup/Teardown approach.
pytest-flask facilitates testing Flask apps by providing a set of common fixtures used for testing Flask apps. This library is not used in this tutorial, as I want to show how to create the fixtures that help support testing Flask apps.
First, fixtures are defined as functions (which should have descriptive names indicating their purpose).
Second, multiple fixtures can be run to set the initial state for a test function. In fact, fixtures can even call other fixtures! So, you can compose them together to create the required state.
Finally, fixtures can be run with different scopes:
- `function` - run once per test function (default scope)
- `class` - run once per test class
- `module` - run once per module (e.g., a test file)
- `session` - run once per session

For example, if you have a fixture with module scope, that fixture will run once (and only once) before the test functions in the module run.
Fixtures should be created in tests/conftest.py.
To help facilitate testing the `User` class in project/models.py, we can add a fixture to tests/conftest.py that is used to create a `User` object to test:
import pytest

from project.models import User


@pytest.fixture(scope='module')
def new_user():
    user = User('patkennedy79@gmail.com', 'FlaskIsAwesome')
    return user
The `@pytest.fixture` decorator specifies that this function is a fixture with `module`-level scope. In other words, this fixture will be called once per test module.
This fixture, `new_user`, creates an instance of `User` using valid arguments to the constructor. `user` is then passed to the test function (`return user`).
We can simplify the `test_new_user()` test function from earlier by using the `new_user` fixture in tests/unit/test_models.py:
def test_new_user_with_fixture(new_user):
    """
    GIVEN a User model
    WHEN a new User is created
    THEN check the email, hashed_password, authenticated, and role fields are defined correctly
    """
    assert new_user.email == 'patkennedy79@gmail.com'
    assert new_user.hashed_password != 'FlaskIsAwesome'
    assert new_user.role == 'user'
By using a fixture, the test function is reduced to the `assert` statements that perform the checks against the `User` object.
To help facilitate testing all the view functions in the Flask project, a fixture can be created in tests/conftest.py:
import pytest

from project import create_app


@pytest.fixture(scope='module')
def test_client():
    flask_app = create_app('flask_test.cfg')

    # Create a test client using the Flask application configured for testing
    with flask_app.test_client() as testing_client:
        # Establish an application context
        with flask_app.app_context():
            yield testing_client  # this is where the testing happens!
This fixture creates the test client using a context manager:
with flask_app.test_client() as testing_client:
Next, the Application context is pushed onto the stack for use by the test functions:
with flask_app.app_context():
    yield testing_client  # this is where the testing happens!
To learn more about the Application context in Flask, refer to the following blog posts:
The `yield testing_client` statement means that execution is being passed to the test functions.
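The mechanics are those of a Python generator: code before the `yield` is setup, code after it is teardown. A plain-generator sketch of that control flow (no pytest involved; `fixture_like` and `resource` are illustrative names):

```python
def fixture_like():
    resource = {'open': True}   # setup
    yield resource              # hand the resource over; the test body runs now
    resource['open'] = False    # teardown, resumed after the test finishes

gen = fixture_like()
resource = next(gen)            # pytest advances the generator before the test
assert resource['open']         # ...the test body would run here...
try:
    next(gen)                   # pytest resumes it afterwards for teardown
except StopIteration:
    pass                        # the generator is exhausted once teardown ends
assert resource['open'] is False
```

This is why any cleanup code placed after the `yield` in a fixture runs even though the fixture function appears to have "returned" already.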
We can simplify the functional tests from earlier with the `test_client` fixture in tests/functional/test_recipes.py:
def test_home_page_with_fixture(test_client):
    """
    GIVEN a Flask application configured for testing
    WHEN the '/' page is requested (GET)
    THEN check that the response is valid
    """
    response = test_client.get('/')
    assert response.status_code == 200
    assert b"Welcome to the" in response.data
    assert b"Flask User Management Example!" in response.data
    assert b"Need an account?" in response.data
    assert b"Existing user?" in response.data


def test_home_page_post_with_fixture(test_client):
    """
    GIVEN a Flask application
    WHEN the '/' page is posted to (POST)
    THEN check that a '405' status code is returned
    """
    response = test_client.post('/')
    assert response.status_code == 405
    assert b"Flask User Management Example!" not in response.data
Did you notice that much of the duplicate code is gone? By utilizing the `test_client` fixture, each test function is simplified down to the HTTP call (GET or POST) and the assert that checks the response.
I really find that using fixtures helps to focus the test function on actually doing the testing, as the test initialization is handled in the fixture.
To run the tests, navigate to the top-level folder of the Flask project and run pytest through the Python interpreter:
(venv)$ python -m pytest
============================= test session starts ==============================
tests/functional/test_recipes.py .... [ 30%]
tests/functional/test_users.py ..... [ 69%]
tests/unit/test_models.py .... [100%]
============================== 13 passed in 0.46s ==============================
Why run pytest through the Python interpreter?
The main advantage is that the current directory (e.g., the top-level folder of the Flask project) is added to the system path. This avoids any problems with pytest not being able to find the source code.
pytest will recursively search through your project structure to find the Python files whose names match `test_*.py` and then run the functions that start with `test_` in those files. There is no configuration needed to identify where the test files are located!
To see more details on the tests that were run:
(venv)$ python -m pytest -v
============================= test session starts ==============================
tests/functional/test_recipes.py::test_home_page PASSED [ 7%]
tests/functional/test_recipes.py::test_home_page_post PASSED [ 15%]
tests/functional/test_recipes.py::test_home_page_with_fixture PASSED [ 23%]
tests/functional/test_recipes.py::test_home_page_post_with_fixture PASSED [ 30%]
tests/functional/test_users.py::test_login_page PASSED [ 38%]
tests/functional/test_users.py::test_valid_login_logout PASSED [ 46%]
tests/functional/test_users.py::test_invalid_login PASSED [ 53%]
tests/functional/test_users.py::test_valid_registration PASSED [ 61%]
tests/functional/test_users.py::test_invalid_registration PASSED [ 69%]
tests/unit/test_models.py::test_new_user PASSED [ 76%]
tests/unit/test_models.py::test_new_user_with_fixture PASSED [ 84%]
tests/unit/test_models.py::test_setting_password PASSED [ 92%]
tests/unit/test_models.py::test_user_id PASSED [100%]
============================== 13 passed in 0.62s ==============================
If you only want to run a specific type of test:
python -m pytest tests/unit/
python -m pytest tests/functional/
To really get a sense of when the `test_client()` fixture is run, pytest can provide a call structure of the fixtures and tests with the `--setup-show` argument:
(venv)$ python -m pytest --setup-show tests/functional/test_recipes.py
====================================== test session starts =====================================
tests/functional/test_recipes.py
...
SETUP M test_client
functional/test_recipes.py::test_home_page_with_fixture (fixtures used: test_client).
functional/test_recipes.py::test_home_page_post_with_fixture (fixtures used: test_client).
TEARDOWN M test_client
======================================= 4 passed in 0.18s ======================================
The `test_client` fixture has a 'module' scope, so it's executed prior to the two _with_fixture tests in tests/functional/test_recipes.py.
If you change the scope of the `test_client` fixture to a 'function' scope:
@pytest.fixture(scope='function')
Then the `test_client` fixture will run prior to each of the two _with_fixture tests:
(venv)$ python -m pytest --setup-show tests/functional/test_recipes.py
======================================= test session starts ======================================
tests/functional/test_recipes.py
...
SETUP F test_client
functional/test_recipes.py::test_home_page_with_fixture (fixtures used: test_client).
TEARDOWN F test_client
SETUP F test_client
functional/test_recipes.py::test_home_page_post_with_fixture (fixtures used: test_client).
TEARDOWN F test_client
======================================== 4 passed in 0.21s =======================================
Since we want the `test_client` fixture to only be run once in this module, revert the scope back to 'module'.
When developing tests, it's nice to get an understanding of how much of the source code is actually tested. This concept is known as code coverage.
I need to be very clear that having a set of tests that covers 100% of the source code is by no means an indicator that the code is properly tested.
This metric means that there are a lot of tests and a lot of effort has been put into developing the tests. The quality of the tests still needs to be checked by code inspection.
That said, the other extreme, where there is a minimal set of tests (or none at all!), is much worse!
There are two excellent packages available for determining code coverage: coverage.py and pytest-cov.
I recommend using pytest-cov based on its seamless integration with pytest. It's built on top of coverage.py, from Ned Batchelder, which is the standard in code coverage for Python.
Running pytest when checking for code coverage requires the `--cov` argument to indicate which Python package (`project` in the Flask project structure) to check the coverage of:
(venv)$ python -m pytest --cov=project
============================= test session starts ==============================
tests/functional/test_recipes.py .... [ 30%]
tests/functional/test_users.py ..... [ 69%]
tests/unit/test_models.py .... [100%]
---------- coverage: platform darwin, python 3.8.5-final-0 -----------
Name Stmts Miss Cover
-------------------------------------------------
project/__init__.py 27 0 100%
project/models.py 32 2 94%
project/recipes/__init__.py 3 0 100%
project/recipes/routes.py 5 0 100%
project/users/__init__.py 3 0 100%
project/users/forms.py 18 1 94%
project/users/routes.py 50 4 92%
-------------------------------------------------
TOTAL 138 7 95%
============================== 13 passed in 0.86s ==============================
Even when checking code coverage, arguments can still be passed to pytest:
(venv)$ python -m pytest --setup-show --cov=project
This article served as a guide for testing Flask applications, focusing on:
If you're interested in learning more about Flask, check out my course on how to build, test, and deploy Flask applications:
Original article source at: https://testdriven.io/
Although writing tests at first may look like it prolongs the development process, it saves you a lot of time in the long run.
Well-written tests decrease the possibility of something breaking in a production environment by ensuring your code is doing what you expected. Tests also help you cover marginal cases and make refactoring easier.
In this article, we'll look at how to use pytest, so you'll be able to use it on your own to improve your development process and follow more advanced pytest tutorials.
By the end of this article, you'll be able to:
Although often overlooked, testing is so vital that Python comes with its own built-in testing framework called unittest. Writing tests in unittest can be complicated, though, so in recent years, the pytest framework has become the standard.
Some significant advantages of pytest are:
- the use of the plain `assert` statement rather than unittest's special assertion methods (e.g., `assertEquals`, `assertTrue`)

Since this is a guide rather than a tutorial, we've prepared a simple FastAPI application that you can refer to as you're going through this article. You can clone it from GitHub.
On the basic branch, our API has 4 endpoints (defined in main.py) that use functions from calculations.py to return a result from performing a certain basic arithmetic operation (`+`/`-`/`*`/`/`) on two integers. On the advanced_topics branch, there are two more functionalities added:

- the `CalculationsStoreJSON` class (inside store_calculations.py) - allows you to store and retrieve calculations to/from a JSON file
- the `get_number_fact` function (inside number_facts.py) - makes a call to a remote API to retrieve a fact about a certain number

No knowledge of FastAPI is required to understand this article.
We'll use the basics branch for the first part of this article.
Create and activate the virtual environment and install the requirements:
$ python3.10 -m venv venv
$ source venv/bin/activate
(venv)$ pip install -r requirements.txt
To organize your tests, you can use three possibilities, all of which are used in the example project:
Organized in | Example |
---|---|
Python package (folder including an __init__.py file) | "test_calculations" |
Module | test_commutative_operations.py |
Class | TestCalculationEndpoints |
When it comes to best practices for organizing tests, each programmer has their own preferences.
The purpose of this article is not to show best practices but, instead, to show you all possibilities.
pytest will discover tests on its own if you abide by the following conventions:
- the filename starts with `test_` or ends with `_test.py` (e.g., `test_foo.py` or `foo_test.py`)
- the function name starts with `test_` (e.g., `def test_foo()`)
- the class name starts with `Test` (e.g., `class TestFoo`)

The tests not following the naming convention will not be found, so be careful with your naming.
It's worth noting that the naming convention can be changed on the command line or in a configuration file.
Let's see what the `test_return_sum` test function (in the test_calculation_endpoints.py file) looks like:
# tests/test_endpoints/test_calculation_endpoints.py

def test_return_sum(self):
    # Arrange
    test_data = {
        "first_val": 10,
        "second_val": 8
    }
    client = TestClient(app)

    # Act
    response = client.post("/sum/", json=test_data)

    # Assert
    assert response.status_code == 200
    assert response.json() == 18
Each test function, according to the pytest documentation, consists of four steps:

1. Arrange - prepare everything needed for the test (e.g., `test_data = {"first_val": 10, "second_val": 8}`)
2. Act - the action that kicks off the behavior being tested (e.g., `client.post("/sum/", json=test_data)`)
3. Assert - check that the outcome matches the expectation (e.g., `assert response.json() == 18`)
4. Cleanup - return the test environment to its initial state
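The same steps work outside of FastAPI, too. Here's a minimal plain-Python version of the pattern, using a stand-in `calculate_sum` (assumed here to be simple addition, not the project's actual module):

```python
def calculate_sum(first_val: int, second_val: int) -> int:
    # Stand-in for the project's calculations.calculate_sum
    return first_val + second_val

def test_calculate_sum_pattern():
    # Arrange
    test_data = {"first_val": 10, "second_val": 8}
    # Act
    result = calculate_sum(**test_data)
    # Assert
    assert result == 18

test_calculate_sum_pattern()  # raises AssertionError if the check fails
```

Keeping the three comment markers in real tests makes it obvious at a glance which part of a failing test went wrong.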
pytest gives you a lot of control as to which tests you want to run:
Let's see how this works...
If you're following along with our sample application, `pytest` is already installed if you installed the requirements. For your own projects, `pytest` can be installed like any other package with pip:

(venv)$ pip install pytest
Running the `pytest` command will simply run all the tests that pytest can find:
(venv)$ python -m pytest
=============================== test session starts ===============================
platform darwin -- Python 3.10.4, pytest-7.1.2, pluggy-1.0.0
rootdir: /Users/michael/repos/testdriven/pytest_for_beginners_test_project
plugins: anyio-3.6.1
collected 8 items
tests/test_calculations/test_anticommutative_operations.py .. [ 25%]
tests/test_calculations/test_commutative_operations.py .. [ 50%]
tests/test_endpoints/test_calculation_endpoints.py .... [100%]
================================ 8 passed in 5.19s ================================
pytest will inform you how many tests are found and which modules the tests were found in. In our example app, pytest found 8 tests, and they all passed.
At the bottom of the message, you can see how many tests passed/failed.
As already discussed, tests that don't abide by the proper naming convention will simply not be found. Wrongly named tests don't produce any error, so you need to be mindful of that.
For example, if you rename the `TestCalculationEndpoints` class to `CalculationEndpointsTest`, all the tests inside it simply won't run:
=============================== test session starts ===============================
platform darwin -- Python 3.10.4, pytest-7.1.2, pluggy-1.0.0
rootdir: /Users/michael/repos/testdriven/pytest_for_beginners_test_project
plugins: anyio-3.6.1
collected 4 items
tests/test_calculations/test_anticommutative_operations.py .. [ 50%]
tests/test_calculations/test_commutative_operations.py .. [100%]
================================ 4 passed in 0.15s ================================
Change the name back to `TestCalculationEndpoints` before moving on.
Your test won't always pass on the first try.
Corrupt the predicted output in the `assert` statement in `test_calculate_sum` to see what the output for a failing test looks like:
# tests/test_calculations/test_commutative_operations.py

def test_calculate_sum():
    calculation = calculate_sum(5, 3)
    assert calculation == 7  # whoops, a mistake
Run the test. You should see something similar to:
=============================== test session starts ===============================
platform darwin -- Python 3.10.4, pytest-7.1.2, pluggy-1.0.0
rootdir: /Users/michael/repos/testdriven/pytest_for_beginners_test_project
plugins: anyio-3.6.1
collected 8 items
tests/test_calculations/test_anticommutative_operations.py .. [ 25%]
tests/test_calculations/test_commutative_operations.py F. [ 50%]
tests/test_endpoints/test_calculation_endpoints.py .... [100%]
==================================== FAILURES =====================================
_______________________________ test_calculate_sum ________________________________
def test_calculate_sum():
calculation = calculate_sum(5, 3)
> assert calculation == 7
E assert 8 == 7
tests/test_calculations/test_commutative_operations.py:8: AssertionError
============================= short test summary info =============================
FAILED tests/test_calculations/test_commutative_operations.py::test_calculate_sum
=========================== 1 failed, 7 passed in 0.26s ===========================
At the bottom of the message, you can see a short test summary info section. This tells you which test failed and where. In this case, the actual output -- `8` -- doesn't match the expected one -- `7`.
If you scroll a little higher, the failing test is displayed in detail, so it's easier to pinpoint what went wrong (helpful with more complex tests).
Fix this test before moving on.
To run a specific package or module, you just need to add a full relative path to the specific test set to the pytest command.
For a package:
(venv)$ python -m pytest tests/test_calculations
This command will run all the tests inside the "tests/test_calculations" package.
For a module:
(venv)$ python -m pytest tests/test_calculations/test_commutative_operations.py
This command will run all the tests inside the tests/test_calculations/test_commutative_operations.py module.
The output of both will be similar to the previous one, except the number of executed tests will be smaller.
To access a specific class in pytest, you need to write a relative path to its module and then add the class after `::`:
(venv)$ python -m pytest tests/test_endpoints/test_calculation_endpoints.py::TestCalculationEndpoints
This command will execute all tests inside the `TestCalculationEndpoints` class.
You can access a specific test the same way as the class, with two colons after the relative path, followed by the test name:
(venv)$ python -m pytest tests/test_calculations/test_commutative_operations.py::test_calculate_sum
If the function you wish to run is inside a class, a single test needs to be run in the following form:
relative_path_to_module::TestClass::test_method
For example:
(venv)$ python -m pytest tests/test_endpoints/test_calculation_endpoints.py::TestCalculationEndpoints::test_return_sum
Now, let's say you only want to run tests dealing with division. Since we included the word "dividend" in the names of the tests that deal with division, you can run just those tests like so:
(venv)$ python -m pytest -k "dividend"
So, 2 out of 8 tests will run:
=============================== test session starts ===============================
platform darwin -- Python 3.10.4, pytest-7.1.2, pluggy-1.0.0
rootdir: /Users/michael/repos/testdriven/pytest_for_beginners_test_project
plugins: anyio-3.6.1
collected 8 items / 6 deselected / 2 selected
tests/test_calculations/test_anticommutative_operations.py . [ 50%]
tests/test_endpoints/test_calculation_endpoints.py . [100%]
========================= 2 passed, 6 deselected in 0.18s =========================
Those are not the only ways to select a specific subset of tests. Refer to the official documentation for more info.
pytest includes many flags; you can list all of them with the `pytest --help` command.
Among the most useful are:
- `pytest -v` increases verbosity by one level, and `pytest -vv` increases it by two levels. For example, when using parametrization (running the same test multiple times with different inputs/outputs), running just `pytest` informs you how many test versions passed and how many failed, while adding `-v` also outputs which parameters were used. If you add `-vv`, you'll see each test version with the input parameters. You can see a much more detailed example in the pytest docs.
- `pytest --lf` re-runs only the tests that failed during the last run. If there are no failures, all the tests will run.
- the `-x` flag causes pytest to exit instantly on the first error or failed test.

We covered the basics and are now moving on to more advanced topics.
If you're following along with the repo, switch the branch from basics to advanced_topics (`git checkout advanced_topics`).
Sometimes, a single example input for your test will suffice, but there are also many occasions that you'll want to test multiple inputs -- e.g., emails, passwords, etc.
You can add multiple inputs and their respective outputs by parametrizing via the `@pytest.mark.parametrize` decorator.
For example, with anti-commutative operations, the order of the numbers passed matters. It would be smart to cover more cases to ensure that the function works correctly for all the cases:
# tests/test_calculations/test_anticommutative_operations.py
import pytest

from calculations import calculate_difference


@pytest.mark.parametrize(
    "first_value, second_value, expected_output",
    [
        (10, 8, 2),
        (8, 10, -2),
        (-10, -8, -2),
        (-8, -10, 2),
    ]
)
def test_calculate_difference(first_value, second_value, expected_output):
    calculation = calculate_difference(first_value, second_value)
    assert calculation == expected_output
@pytest.mark.parametrize has a strictly structured form: the first argument is a single string with comma-separated parameter names, and the second is a list of tuples, each supplying one value per parameter for a single test run.

If you run that test, it will run 4 times, each time with different inputs and output:
(venv)$ python -m pytest -v tests/test_calculations/test_anticommutative_operations.py::test_calculate_difference
=============================== test session starts ===============================
platform darwin -- Python 3.10.4, pytest-7.1.2, pluggy-1.0.0
rootdir: /Users/michael/repos/testdriven/pytest_for_beginners_test_project
plugins: anyio-3.6.1
collected 4 items
tests/test_calculations/test_anticommutative_operations.py::test_calculate_difference[10-8-2] PASSED [ 25%]
tests/test_calculations/test_anticommutative_operations.py::test_calculate_difference[8-10--2] PASSED [ 50%]
tests/test_calculations/test_anticommutative_operations.py::test_calculate_difference[-10--8--2] PASSED [ 75%]
tests/test_calculations/test_anticommutative_operations.py::test_calculate_difference[-8--10-2] PASSED [100%]
================================ 4 passed in 0.01s ================================
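Conceptually, parametrization automates a loop over test cases. A minimal sketch of what the decorator saves you from writing (with a stand-in `calculate_difference`, since the real one lives in the project's `calculations` module):

```python
# Stand-in for the function under test.
def calculate_difference(first_value, second_value):
    return first_value - second_value

cases = [
    (10, 8, 2),
    (8, 10, -2),
    (-10, -8, -2),
    (-8, -10, 2),
]

# pytest runs one independent test per tuple; a manual loop like this
# stops at the first failing case and reports far less context.
for first_value, second_value, expected_output in cases:
    assert calculate_difference(first_value, second_value) == expected_output
```

The important difference: pytest reports each case as a separate test, so one failing tuple doesn't hide the results of the others.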
It's a good idea to move the Arrange (and consequently Cleanup) step to a separate fixture function when the Arrange step is exactly the same in multiple tests or if it's so complicated that it hurts tests' readability.
A function is marked as a fixture with a @pytest.fixture
decorator.
The old version of TestCalculationEndpoints
had a step for creating a TestClient
in each method.
For example:
# tests/test_endpoints/test_calculation_endpoints.py
def test_return_sum(self):
    test_data = {
        "first_val": 10,
        "second_val": 8
    }
    client = TestClient(app)
    response = client.post("/sum/", json=test_data)
    assert response.status_code == 200
    assert response.json() == 18
In the advanced_topics branch, you'll see that the method now looks much cleaner:
# tests/test_endpoints/test_calculation_endpoints.py
def test_return_sum(self, test_app):
    test_data = {
        "first_val": 10,
        "second_val": 8
    }
    response = test_app.post("/sum/", json=test_data)
    assert response.status_code == 200
    assert response.json() == 18
The other two tests were left as they were so that you can compare the approaches (don't mix styles like that in real life; it makes no sense).
test_return_sum
now uses a fixture called test_app
that you can see in the conftest.py file:
# tests/conftest.py
import pytest
from starlette.testclient import TestClient

from main import app


@pytest.fixture(scope="module")
def test_app():
    client = TestClient(app)
    return client
What's going on?

- The `@pytest.fixture()` decorator marks the function `test_app` as a fixture. When pytest reads that module, it adds the function to a list of fixtures. Test functions can then use any fixture from that list.
- The fixture returns a `TestClient`, so test API calls can be performed.

Another important thing to notice is that the function is not passed the fixture itself but the fixture's return value.
Fixtures are created when first requested by a test and destroyed based on their scope. After a fixture is destroyed, it needs to be invoked again if required by another test; so, you need to be mindful of the scope of time-expensive fixtures (e.g., ones making API calls).
There are five possible scopes, from the narrowest to the broadest:
Scope | Description |
---|---|
function (default) | The fixture is destroyed at the end of the test. |
class | The fixture is destroyed during the teardown of the last test in the class. |
module | The fixture is destroyed during the teardown of the last test in the module. |
package | The fixture is destroyed during the teardown of the last test in the package. |
session | The fixture is destroyed at the end of the test session. |
To change the scope in the previous example, you just need to set the scope
parameter:
# tests/conftest.py
import pytest
from starlette.testclient import TestClient

from main import app


@pytest.fixture(scope="function")  # scope changed
def test_app():
    client = TestClient(app)
    return client
How much the fixture's scope matters depends on how time-consuming the fixture is. Creating a `TestClient` isn't very time-consuming, so changing the scope doesn't noticeably shorten the test run. But, for example, running 10 tests against a fixture that calls an external API can be very time-consuming, so for such fixtures it's probably best to use the `module` (or a broader) scope.
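The trade-off behind scope selection can be sketched without pytest at all; the counter below is purely illustrative, not pytest internals:

```python
# Illustrative cost model: a function-scoped fixture is rebuilt for every
# test, while a module-scoped one is built once and shared.
calls = {"setup": 0}

def expensive_setup():
    """Stand-in for a costly fixture body (e.g., one calling an external API)."""
    calls["setup"] += 1
    return {"client": "ready"}

# scope="function": 3 tests -> 3 setups
for _ in range(3):
    value = expensive_setup()
    assert value["client"] == "ready"
assert calls["setup"] == 3

# scope="module": 3 tests -> 1 shared setup
calls["setup"] = 0
shared = expensive_setup()
for _ in range(3):
    assert shared["client"] == "ready"
assert calls["setup"] == 1
```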
When your production code has to deal with files, your tests will as well.
To avoid interference between test files (or with the rest of the app) and to avoid extra cleanup work, it's best to use a unique temporary directory.
In the sample app, we stored all the operations performed on a JSON file for future analysis. Now, since you definitely don't want to alter a production file during test runs, you need to create a separate, temporary JSON file.
The code to be tested can be found in store_calculations.py:
# store_calculations.py
import json


class CalculationsStoreJSON:
    def __init__(self, json_file_path):
        self.json_file_path = json_file_path
        with open(self.json_file_path / "calculations.json", "w") as file:
            json.dump([], file)

    def add(self, calculation):
        with open(self.json_file_path / "calculations.json", "r+") as file:
            calculations = json.load(file)
            calculations.append(calculation)
            file.seek(0)
            json.dump(calculations, file)

    def list_operation_usages(self, operation):
        with open(self.json_file_path / "calculations.json", "r") as file:
            calculations = json.load(file)
        return [calculation for calculation in calculations if calculation['operation'] == operation]
Notice that upon initializing CalculationsStoreJSON
, you have to provide a json_file_path
, where your JSON file will be stored. This can be any valid path on disk; you pass the path the same way for production code and the tests.
Fortunately, pytest provides a number of built-in fixtures, one of which, `tmp_path`, we can use in this case:
# tests/test_advanced/test_calculations_storage.py
from store_calculations import CalculationsStoreJSON


def test_correct_calculations_listed_from_json(tmp_path):
    store = CalculationsStoreJSON(tmp_path)
    calculation_with_multiplication = {"value_1": 2, "value_2": 4, "operation": "multiplication"}

    store.add(calculation_with_multiplication)

    assert store.list_operation_usages("multiplication") == [{"value_1": 2, "value_2": 4, "operation": "multiplication"}]
This test checks that, upon saving a calculation to a JSON file using the CalculationsStoreJSON.add() method, we can retrieve a list of certain operations using CalculationsStoreJSON.list_operation_usages().
We passed the `tmp_path` fixture to this test; it returns a `pathlib.Path` object that points to a temporary directory inside the base temporary directory. Each test invocation that uses `tmp_path` gets its own unique subdirectory of that base directory.
It's worth noting that, to help with debugging, pytest creates a new base temporary directory during each test session, while old base directories are removed after 3 sessions.
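Outside of a pytest run, you can mimic what `tmp_path` provides with the standard library's `tempfile`; this sketch only illustrates the throwaway-directory idea, it is not how pytest implements the fixture:

```python
import json
import tempfile
from pathlib import Path

# tmp_path hands each test a unique pathlib.Path to a temporary directory;
# TemporaryDirectory gives us the same ingredient manually.
with tempfile.TemporaryDirectory() as tmp_dir:
    tmp_path = Path(tmp_dir)  # what pytest would inject into the test
    store_file = tmp_path / "calculations.json"
    store_file.write_text(json.dumps([{"operation": "multiplication"}]))
    data = json.loads(store_file.read_text())
    assert data[0]["operation"] == "multiplication"

# Once the context exits, the directory and everything in it are gone.
assert not store_file.exists()
```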
With monkeypatching, you dynamically modify the behavior of a piece of code at runtime without actually changing the source code.
Although it's not necessarily limited to testing, in pytest it's used to modify the behavior of code inside the tested unit. It's usually used to replace expensive function calls, like HTTP calls to APIs, with some pre-defined dummy behavior that's fast and easy to control.
For example, instead of making a call to a real API to get a response, you return some hardcoded response that's used inside tests.
Let's take a deeper look. In our app, there's a function that returns a fact about some number that's retrieved from a public API:
# number_facts.py
import requests


def get_number_fact(number):
    url = f"http://numbersapi.com/{number}?json"
    response = requests.get(url)
    json_resp = response.json()
    if json_resp["found"]:
        return json_resp["text"]
    return "No fact about this number."
You don't want to call the API during your tests because it slows the test run, the tests break whenever the external service or network is unavailable, and you can't control the data it returns.
In this case, you want to mock the response, so it returns the part we're interested in without actually making the HTTP request:
# tests/test_advanced/test_number_facts.py
import requests

from number_facts import get_number_fact


class MockedResponse:
    def __init__(self, json_body):
        self.json_body = json_body

    def json(self):
        return self.json_body


def mock_get(*args, **kwargs):
    return MockedResponse({
        "text": "7 is the number of days in a week.",
        "found": "true",
    })


def test_get_number_fact(monkeypatch):
    monkeypatch.setattr(requests, 'get', mock_get)

    number = 7
    fact = '7 is the number of days in a week.'

    assert get_number_fact(number) == fact
A lot is happening here:

- With `monkeypatch.setattr`, we overrode the `get` function of the `requests` package with our own function, `mock_get`. All the calls inside the app code to `requests.get` will now actually call `mock_get` during the execution of this test.
- The `mock_get` function returns a `MockedResponse` instance whose `json_body` is set to the value we assigned inside `mock_get` (`{"text": "7 is the number of days in a week.", "found": "true"}`).
- Whenever `requests.get("http://numbersapi.com/7?json")` is called in the production code (`get_number_fact`), a `MockedResponse` with a hardcoded fact is returned instead.

This way, you can still verify the behavior of your function (getting a fact about a number from an API response) without really calling the API.
There are a number of reasons why pytest has become a standard over the past few years, most notably its minimal boilerplate, plain assert statements, flexible fixture system, and rich plugin ecosystem.
pytest offers much more than what we covered in this article.
Their documentation includes helpful how-to guides that cover in-depth most of what we skimmed here. They also provide a number of examples.
pytest also comes with an extensive list of plugins, which you can use to extend pytest functionalities.
Here are a few you might find useful:

- pytest-cov - coverage reporting
- pytest-xdist - run tests in parallel across multiple CPUs
- pytest-mock - a thin wrapper around the unittest.mock patching API
This article should have helped you understand how the pytest library works and what it's possible to accomplish with it. However, understanding just how pytest works and how testing works are not the same. Learning to write meaningful tests takes practice and understanding of what you expect your code to do.
Original article source at: https://testdriven.io/blog/pytest-for-beginners/
This is a pytest plugin that enables you to test code that relies on a running PostgreSQL database. It provides fixtures for a PostgreSQL process and client.
Warning
Tested on PostgreSQL versions >= 10. See tests for more details.
Install with:
pip install pytest-postgresql
You will also need to install psycopg
. See its installation instructions. Note that this plugin requires psycopg
version 3. It is possible to simultaneously install version 3 and version 2 for libraries that require the latter (see those instructions).
The plugin contains three fixtures:

- postgresql - a client fixture that provides a database connection and cleans up the test database after each test
- postgresql_proc - a session-scoped fixture that starts a PostgreSQL server process
- postgresql_noproc - a noop process fixture for connecting to an already running PostgreSQL instance

Simply include one of these fixtures in your test's fixture list.
You can also create additional postgresql client and process fixtures if you need to:
from pytest_postgresql import factories
postgresql_my_proc = factories.postgresql_proc(
    port=None, unixsocketdir='/var/run')
postgresql_my = factories.postgresql('postgresql_my_proc')
Note
Each PostgreSQL process fixture can be configured in a different way than the others through the fixture factory arguments.
Sample test
def test_example_postgres(postgresql):
    """Check main postgresql fixture."""
    cur = postgresql.cursor()
    cur.execute("CREATE TABLE test (id serial PRIMARY KEY, num integer, data varchar);")
    postgresql.commit()
    cur.close()
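The cursor/execute/commit flow in the sample is plain DB-API 2.0. The same pattern can be tried without a PostgreSQL server using the standard library's sqlite3 (with the SQL types adjusted for SQLite):

```python
import sqlite3

# Same DB-API flow the postgresql fixture exposes:
# connection -> cursor -> execute -> commit.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE test (id INTEGER PRIMARY KEY, num INTEGER, data TEXT);")
cur.execute("INSERT INTO test (num, data) VALUES (?, ?)", (42, "hello"))
conn.commit()

cur.execute("SELECT num, data FROM test;")
row = cur.fetchone()
assert row == (42, "hello")

cur.close()
conn.close()
```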
If you want the database fixture to be automatically populated with your schema, there are two ways: pass a load list to the client fixture factory, or pass it to the process fixture factory.

Both accept the same set of possible loaders: a path to an SQL file, an import path to a loading function (dotted or module:function style), or the function object itself.

A loading function will receive host, port, user, dbname, and password kwargs and will have to perform the connection to the database itself. However, inside it you'll be able to run SQL files or even trigger database migrations programmatically.
The client-specific load populates the database for each test:
postgresql_my_with_schema = factories.postgresql(
    'postgresql_my_proc',
    load=["schemafile.sql", "otherschema.sql", "import.path.to.function", "import.path.to:otherfunction", load_this],
)
Warning
This way, the database will still be dropped each time.
The process fixture performs the load once per test session, loading the data into a template database. The client fixture then creates each test database from the template, which significantly speeds up the tests.
postgresql_my_proc = factories.postgresql_proc(
    load=["schemafile.sql", "otherschema.sql", "import.path.to.function", "import.path.to:otherfunction", load_this],
)
pytest --postgresql-populate-template=path.to.loading_function --postgresql-populate-template=path.to.other:loading_function --postgresql-populate-template=path/to/file.sql
The loading_function from the example will receive the host, port, user, dbname, and password kwargs and has to commit its changes itself.

Connecting to an already existing postgresql database

Some projects use already running postgresql servers (e.g., on docker instances). In order to connect to them, use the postgresql_noproc fixture.
postgresql_external = factories.postgresql('postgresql_noproc')
By default, the postgresql_noproc fixture connects to a postgresql instance on port 5432. Standard configuration options apply to it.
The table below marks which configuration options work with the postgresql_noproc fixture.

You can define your settings in three ways: fixture factory argument, command line option, and pytest.ini configuration option. You can pick whichever you prefer, but remember that these settings are handled in the following order:
1. Fixture factory argument
2. Command line option
3. Configuration option in your pytest.ini file
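The precedence rule above can be sketched as a first-non-None lookup; `resolve_option` is an illustrative helper, not part of the plugin's API:

```python
# Fixture factory argument wins over the command line option, which wins
# over the pytest.ini option; a default applies when none are set.
def resolve_option(factory_arg=None, cli_opt=None, ini_opt=None, default=None):
    for value in (factory_arg, cli_opt, ini_opt, default):
        if value is not None:
            return value
    return None

assert resolve_option(factory_arg=8888, cli_opt=5433, ini_opt=5432) == 8888
assert resolve_option(cli_opt=5433, ini_opt=5432) == 5433
assert resolve_option(ini_opt=5432) == 5432
assert resolve_option(default="random") == "random"
```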
Configuration options
PostgreSQL option | Fixture factory argument | Command line option | pytest.ini option | Noop process fixture | Default |
---|---|---|---|---|---|
Path to executable | executable | --postgresql-exec | postgresql_exec | /usr/lib/postgresql/13/bin/pg_ctl | |
host | host | --postgresql-host | postgresql_host | yes | 127.0.0.1 |
port | port | --postgresql-port | postgresql_port | yes (5432) | random |
postgresql user | user | --postgresql-user | postgresql_user | yes | postgres |
password | password | --postgresql-password | postgresql_password | yes | |
Starting parameters (extra pg_ctl arguments) | startparams | --postgresql-startparams | postgresql_startparams | -w | |
Postgres exe extra arguments (passed via pg_ctl's -o argument) | postgres_options | --postgresql-postgres-options | postgresql_postgres_options | ||
Log filename's prefix | logsprefix | --postgresql-logsprefix | postgresql_logsprefix | ||
Location for unixsockets | unixsocket | --postgresql-unixsocketdir | postgresql_unixsocketdir | $TMPDIR | |
Database name | dbname | --postgresql-dbname | postgresql_dbname | yes, however with xdist an index is being added to name, resulting in test0, test1 for each worker. | test |
Default Schema either in sql files or import path to function that will load it (list of values for each) | load | --postgresql-load | postgresql_load | yes | |
PostgreSQL connection options | options | --postgresql-options | postgresql_options | yes |
Example usage:
pass it as an argument in your own fixture
postgresql_proc = factories.postgresql_proc(port=8888)
use --postgresql-port
command line option when you run your tests
py.test tests --postgresql-port=8888
specify your port as postgresql_port
in your pytest.ini
file.
To do so, put a line like the following under the [pytest] section of your pytest.ini:

[pytest]
postgresql_port = 8888
This example shows how to populate database and create an SQLAlchemy's ORM connection:
Sample below is simplified session fixture from pyramid_fullauth tests:
import pytest
import pyramid_basemodel
import transaction
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
from sqlalchemy.pool import NullPool
from zope.sqlalchemy import register


@pytest.fixture
def db_session(postgresql):
    """Session for SQLAlchemy."""
    from pyramid_fullauth.models import Base

    connection = f'postgresql+psycopg2://{postgresql.info.user}:@{postgresql.info.host}:{postgresql.info.port}/{postgresql.info.dbname}'
    engine = create_engine(connection, echo=False, poolclass=NullPool)
    pyramid_basemodel.Session = scoped_session(sessionmaker())
    register(pyramid_basemodel.Session)
    pyramid_basemodel.bind_engine(
        engine, pyramid_basemodel.Session, should_create=True, should_drop=True)

    yield pyramid_basemodel.Session

    transaction.commit()
    Base.metadata.drop_all(engine)
@pytest.fixture
def user(db_session):
    """Test user fixture."""
    from pyramid_fullauth.models import User
    from tests.tools import DEFAULT_USER

    new_user = User(**DEFAULT_USER)
    db_session.add(new_user)
    transaction.commit()
    return new_user
def test_remove_last_admin(db_session, user):
    """
    Sample test checks internal login, but shows usage in tests with SQLAlchemy
    """
    user = db_session.merge(user)
    user.is_admin = True
    transaction.commit()

    user = db_session.merge(user)
    with pytest.raises(AttributeError):
        user.is_admin = False
Note
See the original code in pyramid_fullauth's conftest file. Depending on your needs, the in-between code can run alembic migrations (in the case of a sqlalchemy stack) or any other setup code.
It is possible, and it appears other libraries do this in their tests, to maintain database state with pytest-postgresql's database managing functionality.

For this, import DatabaseJanitor and use its init and drop methods:
import psycopg2
import pytest
from pytest_postgresql.janitor import DatabaseJanitor


@pytest.fixture
def database(postgresql_proc):
    # variable definition
    janitor = DatabaseJanitor(
        postgresql_proc.user,
        postgresql_proc.host,
        postgresql_proc.port,
        "my_test_database",
        postgresql_proc.version,
        password="secret_password",
    )
    janitor.init()
    yield psycopg2.connect(
        dbname="my_test_database",
        user=postgresql_proc.user,
        password="secret_password",
        host=postgresql_proc.host,
        port=postgresql_proc.port,
    )
    janitor.drop()
or use it as a context manager:
import psycopg2
import pytest
from pytest_postgresql.janitor import DatabaseJanitor


@pytest.fixture
def database(postgresql_proc):
    # variable definition
    with DatabaseJanitor(
        postgresql_proc.user,
        postgresql_proc.host,
        postgresql_proc.port,
        "my_test_database",
        postgresql_proc.version,
        password="secret_password",
    ):
        yield psycopg2.connect(
            dbname="my_test_database",
            user=postgresql_proc.user,
            password="secret_password",
            host=postgresql_proc.host,
            port=postgresql_proc.port,
        )
Note
DatabaseJanitor manages the state of the database, but you'll have to create the connection to use in test code yourself.
You can optionally pass in a recognized postgresql ISOLATION_LEVEL for additional control.
Note
See DatabaseJanitor usage in python's warehouse test code https://github.com/pypa/warehouse/blob/5d15bfe/tests/conftest.py#L127
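The shape of the janitor is the classic setup/teardown context manager: create on enter, drop on exit, even if the test body raises. An illustrative stand-in (not the real DatabaseJanitor):

```python
from contextlib import contextmanager

events = []  # records the lifecycle order

@contextmanager
def janitor(dbname):
    """Toy janitor: 'init' the database on enter, 'drop' it on exit."""
    events.append(f"init {dbname}")
    try:
        yield dbname
    finally:
        events.append(f"drop {dbname}")  # runs even if the body raises

with janitor("my_test_database") as db:
    events.append(f"use {db}")

assert events == [
    "init my_test_database",
    "use my_test_database",
    "drop my_test_database",
]
```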
To connect to a postgresql instance running in docker and run tests against it, use the noproc fixtures.
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
This will start postgresql in a docker container, however using a postgresql installed locally is not much different.
In your tests, make sure that all of them use the postgresql_noproc fixture like this:
from pytest_postgresql import factories
postgresql_in_docker = factories.postgresql_noproc()
postgresql = factories.postgresql("postgresql_in_docker", dbname="test")
def test_postgres_docker(postgresql):
    """Run test."""
    cur = postgresql.cursor()
    cur.execute("CREATE TABLE test (id serial PRIMARY KEY, num integer, data varchar);")
    postgresql.commit()
    cur.close()
And run tests:
pytest --postgresql-host=172.17.0.2 --postgresql-password=mysecretpassword
If you have several tests that require common initialisation, you need to define a load and pass it to your custom postgresql process fixture:
import psycopg2
from pytest_postgresql import factories


def load_database(**kwargs):
    db_connection = psycopg2.connect(**kwargs)
    with db_connection.cursor() as cur:
        cur.execute("CREATE TABLE stories (id serial PRIMARY KEY, name varchar);")
        cur.execute(
            "INSERT INTO stories (name) VALUES"
            "('Silmarillion'), ('Star Wars'), ('The Expanse'), ('Battlestar Galactica')"
        )
    db_connection.commit()


postgresql_proc = factories.postgresql_proc(
    load=[load_database],
)

postgresql = factories.postgresql(
    "postgresql_proc",
)
You can also define your own database name by passing the same dbname value to both factories.

The way this works is that the process fixture populates the template database, which in turn is used automatically by the client fixture to create a test database from scratch. Fast, clean, and with no dangling transactions that could be accidentally rolled back.

The same approach works with the noproc fixture when connecting to an already running postgresql instance, whether it's on a docker machine, remote, or local.
Author: ClearcodeHQ
Source code: https://github.com/ClearcodeHQ/pytest-postgresql
License: LGPL-3.0, GPL-3.0 licenses found
#python #postgresql #pytest
This article serves as a guide to testing Flask applications with pytest.

First, we'll look at why testing is important for creating maintainable software and what you should focus on when testing. Then, we'll detail how to:

By the end of this article, you'll be able to:

In general, testing helps ensure that your app will work as expected for your end users.

Software projects with high test coverage are never perfect, but it's a good initial indicator of the quality of the software. Additionally, testable code is generally a sign of good software architecture, which is why advanced developers take testing into account throughout the entire development lifecycle.

Tests can be considered at three levels:

Unit tests test the functionality of an individual unit of code isolated from its dependencies. They are the first line of defense against errors and inconsistencies in your codebase. They test from the inside out, from the programmer's point of view.
Functional tests test multiple components of a software product to make sure the components are working together properly. Typically, these tests focus on functionality that the user will be utilizing. They test from the outside in, from the end user's point of view.
Both unit and functional testing are fundamental parts of the Test-Driven Development (TDD) process.
Testing improves the maintainability of your code.
Maintainability refers to making bug fixes or enhancements to your code or to another developer needing to update your code at some point in the future.
Testing should be combined with a Continuous Integration (CI) process to ensure that your tests are constantly being executed, ideally on each commit to your repository. A solid suite of tests can be critical to catching defects quickly and early in the development process before your end users come across them in production.
What should you test?
Again, unit tests should focus on testing small units of code in isolation.
For example, in a Flask app, you may use unit tests to test:
Functional tests, meanwhile, should focus on how the view functions operate.
For example:
Focus on testing scenarios that the end user will interact with. The experience that the users of your product have is paramount!
pytest is a test framework for Python used to write, organize, and run test cases. After setting up your basic test structure, pytest makes it really easy to write tests and provides a lot of flexibility for running the tests. pytest satisfies the key aspects of a good test environment:
pytest is incredible! I highly recommend using it for testing any application or script written in Python.
If you're interested in really learning all the different aspects of pytest, I highly recommend the Python Testing with pytest book by Brian Okken.
Python has a built-in test framework called unittest, which is a great choice for testing as well. The unittest module is inspired by the xUnit test framework.
It provides the following:

- a TestCase base class for organizing tests into classes
- a test runner with automatic test discovery
- assert* style methods for performing checks

The main differences between pytest and unittest:
Feature | pytest | unittest |
---|---|---|
Installation | Third-party library | Part of the core standard library |
Test setup and teardown | fixtures | setUp() and tearDown() methods |
Assertion Format | Built-in assert | assert* style methods |
Structure | Functional | Object-oriented |
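The table's rows are easiest to see side by side. Both snippets below check the same thing; unittest is in the standard library, so this runs as-is:

```python
import unittest

# pytest style: a plain function with a bare assert.
def test_sum_pytest_style():
    assert sum([1, 2, 3]) == 6

# unittest style: a TestCase subclass with assert* methods.
class TestSum(unittest.TestCase):
    def test_sum(self):
        self.assertEqual(sum([1, 2, 3]), 6)

test_sum_pytest_style()  # raises AssertionError on failure

result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestSum)
)
assert result.wasSuccessful()
```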
Either framework is good for testing a Flask project. However, I prefer pytest since it:

- requires less boilerplate code, so your test suites are more readable
- supports the plain assert statement, which is far more readable and easier to remember compared to the assertSomething methods -- like assertEqual, assertTrue, and assertContains -- in unittest

I like to organize all the test cases in a separate "tests" folder at the same level as the application files.
Additionally, I really like differentiating between unit and functional tests by splitting them out as separate sub-folders. This structure gives you the flexibility to easily run just the unit tests (or just the functional tests, for that matter).
Here's an example of the structure of the "tests" directory:
└── tests
├── conftest.py
├── functional
│ ├── __init__.py
│ ├── test_stocks.py
│ └── test_users.py
└── unit
├── __init__.py
└── test_models.py
And, here's how the "tests" folder fits into a typical Flask project with blueprints:
├── app.py
├── project
│ ├── __init__.py
│ ├── models.py
│ └── ...blueprint folders...
├── requirements.txt
├── tests
│ ├── conftest.py
│ ├── functional
│ │ ├── __init__.py
│ │ ├── test_stocks.py
│ │ └── test_users.py
│ └── unit
│ ├── __init__.py
│ └── test_models.py
└── venv
The first test that we're going to write is a unit test for project/models.py, which contains the SQLAlchemy interface to the database.
This test doesn't access the underlying database; it only checks the interface class used by SQLAlchemy.
Since this test is a unit test, it should be implemented in tests/unit/test_models.py:
from project.models import User


def test_new_user():
    """
    GIVEN a User model
    WHEN a new User is created
    THEN check the email, hashed_password, and role fields are defined correctly
    """
    user = User('patkennedy79@gmail.com', 'FlaskIsAwesome')
    assert user.email == 'patkennedy79@gmail.com'
    assert user.hashed_password != 'FlaskIsAwesome'
    assert user.role == 'user'
Let's take a closer look at this test.
After the import, we start with a description of what the test does:
"""
GIVEN a User model
WHEN a new User is created
THEN check the email, hashed_password, and role fields are defined correctly
"""
Why include so many comments for a test function?
I've found that tests are one of the most difficult aspects of a project to maintain. Often, the code (including the level of comments) for test suites is nowhere near the level of quality as the code being tested.
A common structure used to describe what each test function does helps with maintainability by making it easier for someone (another developer, your future self) to quickly understand the purpose of each test.
A common practice is to use the GIVEN-WHEN-THEN structure:
- GIVEN - what are the initial conditions for the test?
- WHEN - what is occurring that needs to be tested?
- THEN - what is the expected response?
For more, review the GivenWhenThen article by Martin Fowler and the Python Testing with pytest book by Brian Okken.
Next, we have the actual test:
user = User('patkennedy79@gmail.com', 'FlaskIsAwesome')
assert user.email == 'patkennedy79@gmail.com'
assert user.hashed_password != 'FlaskIsAwesome'
assert user.role == 'user'
After creating a new user
with valid arguments to the constructor, the properties of the user
are checked to make sure it was created properly.
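For context, a minimal User model that would satisfy this test might look like the sketch below. This is hypothetical: the real project presumably hashes passwords with a proper password-hashing library; sha256 is used here only so the example is self-contained.

```python
import hashlib

class User:
    """Hypothetical model: stores a hash, never the plaintext password."""
    def __init__(self, email, plaintext_password):
        self.email = email
        # Illustrative only -- real apps should use a salted password hasher.
        self.hashed_password = hashlib.sha256(plaintext_password.encode()).hexdigest()
        self.role = 'user'

user = User('patkennedy79@gmail.com', 'FlaskIsAwesome')
assert user.email == 'patkennedy79@gmail.com'
assert user.hashed_password != 'FlaskIsAwesome'
assert user.role == 'user'
```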
The second test that we're going to write is a functional test for project/recipes/routes.py, which contains the view functions for the recipes
blueprint.
Since this test is a functional test, it should be implemented in tests/functional/test_recipes.py:
from project import create_app


def test_home_page():
    """
    GIVEN a Flask application configured for testing
    WHEN the '/' page is requested (GET)
    THEN check that the response is valid
    """
    flask_app = create_app('flask_test.cfg')

    # Create a test client using the Flask application configured for testing
    with flask_app.test_client() as test_client:
        response = test_client.get('/')
        assert response.status_code == 200
        assert b"Welcome to the" in response.data
        assert b"Flask User Management Example!" in response.data
        assert b"Need an account?" in response.data
        assert b"Existing user?" in response.data
This project uses the Application Factory Pattern to create the Flask application. Therefore, the create_app()
function needs to first be imported:
from project import create_app
The test function, test_home_page()
, starts with the GIVEN-WHEN-THEN description of what the test does. Next, a Flask application (flask_app
) is created:
flask_app = create_app('flask_test.cfg')
In order to create the proper environment for testing, Flask provides a test_client helper. This creates a test version of our Flask application, which we used to make a GET call to the '/' URL. We then check that the status code returned is OK (200) and that the response contained the following strings:
These checks match with what we expect the user to see when we navigate to the '/' URL:
An example of an off-nominal functional test would be to utilize an invalid HTTP method (POST) when accessing the '/' URL:
def test_home_page_post():
    """
    GIVEN a Flask application configured for testing
    WHEN the '/' page is posted to (POST)
    THEN check that a '405' status code is returned
    """
    flask_app = create_app('flask_test.cfg')

    # Create a test client using the Flask application configured for testing
    with flask_app.test_client() as test_client:
        response = test_client.post('/')
        assert response.status_code == 405
        assert b"Flask User Management Example!" not in response.data
This test checks that a POST request to the '/' URL results in an error code of 405 (Method Not Allowed) being returned.
Take a second to review the two functional tests... do you see some duplicate code between these two test functions? Do you see a lot of code for initializing the state needed by the test functions? We can use fixtures to address these issues.
Fixtures initialize tests to a known state in order to run tests in a predictable and repeatable manner.
The classic approach to writing and executing tests follows the xUnit type of test framework, where each test runs as follows:

1. SetUp()
2. Run the test
3. TearDown()

The SetUp() and TearDown() methods always run for each unit test within a test suite. This approach results in the same initial state for each test within a test suite, which doesn't provide much flexibility.
The test fixture approach provides much greater flexibility than the classic Setup/Teardown approach.
pytest-flask facilitates testing Flask apps by providing a set of common fixtures used for testing Flask apps. This library is not used in this tutorial, as I want to show how to create the fixtures that help support testing Flask apps.
First, fixtures are defined as functions (which should have descriptive names reflecting their purpose).
Second, multiple fixtures can be run to set the initial state for a test function. In fact, fixtures can even call other fixtures! So, you can compose them together to create the required state.
Finally, fixtures can be run with different scopes:
- function - run once per test function (default scope)
- class - run once per test class
- module - run once per module (e.g., a test file)
- session - run once per session
For example, if you have a fixture with module scope, that fixture will run once (and only once) before the test functions in the module run.
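Module scope essentially means the fixture's result is computed once and then reused by every test in the module. A rough stdlib-only model of that caching behavior (the ModuleScope class and call counter are illustrative, not pytest's internals):

```python
calls = {"count": 0}

def new_user_factory():
    # Stands in for an expensive fixture body (e.g., building a User object)
    calls["count"] += 1
    return {"email": "patkennedy79@gmail.com", "role": "user"}

class ModuleScope:
    """Caches a fixture value for the lifetime of one test module."""

    def __init__(self, factory):
        self.factory = factory
        self.value = None

    def get(self):
        if self.value is None:        # first request: run the fixture body
            self.value = self.factory()
        return self.value             # later requests: reuse the cached value

scope = ModuleScope(new_user_factory)
user_a = scope.get()   # first test in the module triggers the fixture
user_b = scope.get()   # second test reuses the same object
print(calls["count"], user_a is user_b)  # → 1 True
```

With function scope, by contrast, the factory would run again for every test.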
Fixtures should be created in tests/conftest.py.
To help facilitate testing the User class in project/models.py, we can add a fixture to tests/conftest.py that is used to create a User object to test:
import pytest

from project.models import User

@pytest.fixture(scope='module')
def new_user():
    user = User('patkennedy79@gmail.com', 'FlaskIsAwesome')
    return user
The @pytest.fixture decorator specifies that this function is a fixture with module-level scope. In other words, this fixture will be called once per test module.
This fixture, new_user, creates an instance of User using valid arguments to the constructor. user is then passed to the test function (return user).
We can simplify the test_new_user() test function from earlier by using the new_user fixture in tests/unit/test_models.py:
def test_new_user_with_fixture(new_user):
    """
    GIVEN a User model
    WHEN a new User is created
    THEN check the email, hashed_password, authenticated, and role fields are defined correctly
    """
    assert new_user.email == 'patkennedy79@gmail.com'
    assert new_user.hashed_password != 'FlaskIsAwesome'
    assert new_user.role == 'user'
By using a fixture, the test function is reduced to the assert statements that perform the checks against the User object.
To help facilitate testing all the view functions in the Flask project, a fixture can be created in tests/conftest.py:
import pytest

from project import create_app

@pytest.fixture(scope='module')
def test_client():
    flask_app = create_app('flask_test.cfg')

    # Create a test client using the Flask application configured for testing
    with flask_app.test_client() as testing_client:
        # Establish an application context
        with flask_app.app_context():
            yield testing_client  # this is where the testing happens!
This fixture creates the test client using a context manager:
with flask_app.test_client() as testing_client:
Next, the Application context is pushed onto the stack for use by the test functions:
with flask_app.app_context():
    yield testing_client  # this is where the testing happens!
To learn more about the Application context in Flask, refer to the following blog posts:
The yield testing_client statement means that execution is being passed to the test functions.
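Under the hood, a yield fixture is a generator that the test runner drives: everything before the yield is setup, and everything after it is teardown. A simplified stdlib sketch of those mechanics (illustrative only, not pytest's actual implementation):

```python
def client_fixture():
    client = {"app": "flask_test", "ready": True}  # before yield: setup
    yield client                                   # the test body runs here
    client["ready"] = False                        # after yield: teardown

gen = client_fixture()
client = next(gen)       # runner advances to the yield: setup executes
assert client["ready"]   # ...the test uses the fixture value here...
next(gen, None)          # runner exhausts the generator: teardown executes
print(client["ready"])   # → False
```

pytest performs the equivalent of these `next()` calls around each test that requests the fixture, which is why cleanup code after the yield runs even though the fixture "returned" its value earlier.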
We can simplify the functional tests from earlier with the test_client fixture in tests/functional/test_recipes.py:
def test_home_page_with_fixture(test_client):
    """
    GIVEN a Flask application configured for testing
    WHEN the '/' page is requested (GET)
    THEN check that the response is valid
    """
    response = test_client.get('/')
    assert response.status_code == 200
    assert b"Welcome to the" in response.data
    assert b"Flask User Management Example!" in response.data
    assert b"Need an account?" in response.data
    assert b"Existing user?" in response.data
def test_home_page_post_with_fixture(test_client):
    """
    GIVEN a Flask application
    WHEN the '/' page is posted to (POST)
    THEN check that a '405' status code is returned
    """
    response = test_client.post('/')
    assert response.status_code == 405
    assert b"Flask User Management Example!" not in response.data
Did you notice that much of the duplicate code is gone? By utilizing the test_client fixture, each test function is simplified down to the HTTP call (GET or POST) and the assert that checks the response.
I really find that using fixtures helps to focus the test function on actually doing the testing, as the test initialization is handled in the fixture.
To run the tests, navigate to the top-level folder of the Flask project and run pytest through the Python interpreter:
(venv)$ python -m pytest
============================= test session starts ==============================
tests/functional/test_recipes.py .... [ 30%]
tests/functional/test_users.py ..... [ 69%]
tests/unit/test_models.py .... [100%]
============================== 13 passed in 0.46s ==============================
Why run pytest through the Python interpreter?
The main advantage is that the current directory (e.g., the top-level folder of the Flask project) is added to the system path. This avoids any problems with pytest not being able to find the source code.
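A quick way to see this effect with the standard library alone (probe.py is a throwaway module name invented for this illustration):

```python
import os
import subprocess
import sys
import tempfile

# Create a scratch directory containing a module that reports whether the
# current working directory ended up on sys.path.
with tempfile.TemporaryDirectory() as workdir:
    probe_path = os.path.join(workdir, "probe.py")
    with open(probe_path, "w") as f:
        f.write("import os, sys\n"
                "print(os.getcwd() in sys.path)\n")

    # "python -m probe" mirrors "python -m pytest": running a module with -m
    # prepends the current directory to sys.path.
    result = subprocess.run(
        [sys.executable, "-m", "probe"],
        cwd=workdir, capture_output=True, text=True,
    )
    print(result.stdout.strip())  # → True
```

This is why `python -m pytest` can import the `project` package from the top-level folder without any extra path configuration, whereas a bare `pytest` invocation relies on pytest's own rootdir/conftest.py discovery.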
pytest recursively searches through your project structure to find the Python files named test_*.py and then runs the functions in those files that start with test_. No configuration is needed to identify where the test files are located!
To see more details about the tests that were run:
(venv)$ python -m pytest -v
============================= test session starts ==============================
tests/functional/test_recipes.py::test_home_page PASSED [ 7%]
tests/functional/test_recipes.py::test_home_page_post PASSED [ 15%]
tests/functional/test_recipes.py::test_home_page_with_fixture PASSED [ 23%]
tests/functional/test_recipes.py::test_home_page_post_with_fixture PASSED [ 30%]
tests/functional/test_users.py::test_login_page PASSED [ 38%]
tests/functional/test_users.py::test_valid_login_logout PASSED [ 46%]
tests/functional/test_users.py::test_invalid_login PASSED [ 53%]
tests/functional/test_users.py::test_valid_registration PASSED [ 61%]
tests/functional/test_users.py::test_invalid_registration PASSED [ 69%]
tests/unit/test_models.py::test_new_user PASSED [ 76%]
tests/unit/test_models.py::test_new_user_with_fixture PASSED [ 84%]
tests/unit/test_models.py::test_setting_password PASSED [ 92%]
tests/unit/test_models.py::test_user_id PASSED [100%]
============================== 13 passed in 0.62s ==============================
If you only want to run a specific type of test:
python -m pytest tests/unit/
python -m pytest tests/functional/
To really understand when the test_client() fixture is run, pytest can show the call structure of the fixtures and tests with the --setup-show argument:
(venv)$ python -m pytest --setup-show tests/functional/test_recipes.py
====================================== test session starts =====================================
tests/functional/test_recipes.py
...
SETUP M test_client
functional/test_recipes.py::test_home_page_with_fixture (fixtures used: test_client).
functional/test_recipes.py::test_home_page_post_with_fixture (fixtures used: test_client).
TEARDOWN M test_client
======================================= 4 passed in 0.18s ======================================
The test_client fixture has 'module' scope, so it is executed once before the two _with_fixture tests in tests/functional/test_recipes.py.
If you change the scope of the test_client fixture to 'function' scope:
@pytest.fixture(scope='function')
Then the test_client fixture will run before each of the two _with_fixture tests:
(venv)$ python -m pytest --setup-show tests/functional/test_recipes.py
======================================= test session starts ======================================
tests/functional/test_recipes.py
...
SETUP F test_client
functional/test_recipes.py::test_home_page_with_fixture (fixtures used: test_client).
TEARDOWN F test_client
SETUP F test_client
functional/test_recipes.py::test_home_page_post_with_fixture (fixtures used: test_client).
TEARDOWN F test_client
======================================== 4 passed in 0.21s =======================================
Since we want the test_client fixture to only run once in this module, revert the scope back to 'module'.
When developing tests, it's nice to get an understanding of how much of the source code is actually tested. This concept is known as code coverage.
I need to be very clear that having a set of tests that covers 100% of the source code is by no means an indicator that the code is properly tested.
This metric only means that a lot of tests have been run and a lot of effort has been put into developing them. The quality of the tests still needs to be checked through code inspection.
That said, the other extreme, where there is a minimal set of tests (or none at all!), is much worse!
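To make this concrete, here is a contrived example (the function and its bug are invented for illustration): the test below executes every line of absolute_value, yielding 100% line coverage, yet it never checks the result of the buggy branch.

```python
def absolute_value(x):
    # Deliberately buggy: the negative branch returns the wrong sign
    if x >= 0:
        return x
    return x  # should be -x

def test_absolute_value():
    # Both branches execute, so line coverage is 100%...
    absolute_value(5)
    absolute_value(-5)
    # ...but only the trivial branch's behavior is actually asserted,
    # so the bug goes unnoticed
    assert absolute_value(5) == 5

test_absolute_value()  # passes despite the bug
print(absolute_value(-5))  # → -5, not the correct 5
```

A coverage report would score this test suite perfectly; only reading the assertions reveals how little it verifies.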
There are two excellent packages available for determining code coverage: coverage.py and pytest-cov.
I recommend using pytest-cov based on its seamless integration with pytest. It is built on top of coverage.py, from Ned Batchelder, which is the standard for code coverage in Python.
Running pytest while checking code coverage requires the --cov argument to indicate which Python package (project in the Flask project structure) to check the coverage of:
(venv)$ python -m pytest --cov=project
============================= test session starts ==============================
tests/functional/test_recipes.py .... [ 30%]
tests/functional/test_users.py ..... [ 69%]
tests/unit/test_models.py .... [100%]
---------- coverage: platform darwin, python 3.8.5-final-0 -----------
Name Stmts Miss Cover
-------------------------------------------------
project/__init__.py 27 0 100%
project/models.py 32 2 94%
project/recipes/__init__.py 3 0 100%
project/recipes/routes.py 5 0 100%
project/users/__init__.py 3 0 100%
project/users/forms.py 18 1 94%
project/users/routes.py 50 4 92%
-------------------------------------------------
TOTAL 138 7 95%
============================== 13 passed in 0.86s ==============================
Even when checking code coverage, arguments can still be passed to pytest:
(venv)$ python -m pytest --setup-show --cov=project
This article served as a guide to testing Flask applications, focusing on:
Source: https://testdriven.io
1660276620
本文作為使用 pytest 測試 Flask 應用程序的指南。
我們將首先看看為什麼測試對於創建可維護的軟件很重要,以及在測試時應該關注什麼。然後,我們將詳細說明如何:
在本文結束時,您將能夠:
一般來說,測試有助於確保您的應用程序能夠按預期為最終用戶工作。
具有高測試覆蓋率的軟件項目從來都不是完美的,但它是軟件質量的良好初始指標。此外,可測試代碼通常是良好軟件架構的標誌,這就是高級開發人員在整個開發生命週期中考慮測試的原因。
可以從三個層面考慮測試:
單元測試測試與其依賴項隔離的單個代碼單元的功能。它們是防止代碼庫中的錯誤和不一致的第一道防線。他們從程序員的角度從內到外進行測試。
Functional tests test multiple components of a software product to make sure the components are working together properly. Typically, these tests focus on functionality that the user will be utilizing. They test from the outside in, from the end user's point of view.
Both unit and functional testing are fundamental parts of the Test-Driven Development (TDD) process.
Testing improves the maintainability of your code.
Maintainability refers to making bug fixes or enhancements to your code or to another developer needing to update your code at some point in the future.
Testing should be combined with a Continuous Integration (CI) process to ensure that your tests are constantly being executed, ideally on each commit to your repository. A solid suite of tests can be critical to catching defects quickly and early in the development process before your end users come across them in production.
What should you test?
Again, unit tests should focus on testing small units of code in isolation.
For example, in a Flask app, you may use unit tests to test:
Functional tests, meanwhile, should focus on how the view functions operate.
For example:
Focus on testing scenarios that the end user will interact with. The experience that the users of your product have is paramount!
pytest is a test framework for Python used to write, organize, and run test cases. After setting up your basic test structure, pytest makes it really easy to write tests and provides a lot of flexibility for running the tests. pytest satisfies the key aspects of a good test environment:
pytest is incredible! I highly recommend using it for testing any application or script written in Python.
If you're interested in really learning all the different aspects of pytest, I highly recommend the Python Testing with pytest book by Brian Okken.
Python has a built-in test framework called unittest, which is a great choice for testing as well. The unittest module is inspired by the xUnit test framework.
It provides the following:
assert statements for performing checks
The main differences between pytest and unittest:
Feature | pytest | unittest |
---|---|---|
Installation | Third-party library | Part of the core standard library |
Test setup and teardown | fixtures | setUp() and tearDown() methods |
Assertion Format | Built-in assert | assert* style methods |
Structure | Functional | Object-oriented |
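To make the assertion-style difference concrete, here's the same check written both ways. The `multiply` function is just an illustrative stand-in, not part of the Flask project:

```python
# The same check written in both styles. unittest ships with Python's
# standard library; pytest-style tests are plain functions using bare assert.
import unittest

def multiply(a, b):  # stand-in function under test
    return a * b

# pytest style: a plain function and a plain assert statement
def test_multiply():
    assert multiply(3, 4) == 12

# unittest style: a TestCase subclass and assert* methods
class TestMultiply(unittest.TestCase):
    def test_multiply(self):
        self.assertEqual(multiply(3, 4), 12)

# Run both (pytest would normally discover and run these automatically)
test_multiply()
suite = unittest.TestLoader().loadTestsFromTestCase(TestMultiply)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Both styles express the same check; the pytest version simply needs less scaffolding.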
Either framework is good for testing a Flask project. However, I prefer pytest since it supports the plain assert statement, which is far more readable and easier to remember compared to the assertSomething methods -- like assertEquals, assertTrue, and assertContains -- in unittest.
I like to organize all the test cases in a separate "tests" folder at the same level as the application files.
Additionally, I really like differentiating between unit and functional tests by splitting them out as separate sub-folders. This structure gives you the flexibility to easily run just the unit tests (or just the functional tests, for that matter).
Here's an example of the structure of the "tests" directory:
└── tests
├── conftest.py
├── functional
│ ├── __init__.py
│ ├── test_stocks.py
│ └── test_users.py
└── unit
├── __init__.py
└── test_models.py
And, here's how the "tests" folder fits into a typical Flask project with blueprints:
├── app.py
├── project
│ ├── __init__.py
│ ├── models.py
│ └── ...blueprint folders...
├── requirements.txt
├── tests
│ ├── conftest.py
│ ├── functional
│ │ ├── __init__.py
│ │ ├── test_stocks.py
│ │ └── test_users.py
│ └── unit
│ ├── __init__.py
│ └── test_models.py
└── venv
The first test that we're going to write is a unit test for project/models.py, which contains the SQLAlchemy interface to the database.
This test doesn't access the underlying database; it only checks the interface class used by SQLAlchemy.
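For reference, a drastically simplified, hypothetical version of such a `User` class might look like the following. This is not the project's actual model -- the real one is a SQLAlchemy model and would use a proper password hasher such as `werkzeug.security` -- but it shows the interface the test exercises:

```python
# Hypothetical, simplified stand-in for project/models.py's User class.
# hashlib is used only to keep this sketch self-contained; a real app
# should use a dedicated password hasher (e.g. werkzeug.security).
import hashlib

class User:
    def __init__(self, email: str, password_plaintext: str):
        self.email = email
        # Never store the plaintext password -- store a hash of it.
        self.hashed_password = hashlib.sha256(
            password_plaintext.encode()).hexdigest()
        self.role = 'user'  # default role for new users

user = User('patkennedy79@gmail.com', 'FlaskIsAwesome')
```

Against a class like this, the unit test below checks that the constructor sets each field as expected.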
Since this test is a unit test, it should be implemented in tests/unit/test_models.py:
from project.models import User
def test_new_user():
    """
    GIVEN a User model
    WHEN a new User is created
    THEN check the email, hashed_password, and role fields are defined correctly
    """
    user = User('patkennedy79@gmail.com', 'FlaskIsAwesome')
    assert user.email == 'patkennedy79@gmail.com'
    assert user.hashed_password != 'FlaskIsAwesome'
    assert user.role == 'user'
Let's take a closer look at this test.
After the import, we start with a description of what the test does:
"""
GIVEN a User model
WHEN a new User is created
THEN check the email, hashed_password, and role fields are defined correctly
"""
Why include so many comments for a test function?
I've found that tests are one of the most difficult aspects of a project to maintain. Often, the code (including the level of comments) for test suites is nowhere near the level of quality as the code being tested.
A common structure used to describe what each test function does helps with maintainability by making it easier for someone (another developer, your future self) to quickly understand the purpose of each test.
A common practice is to use the GIVEN-WHEN-THEN structure:
- GIVEN - what are the initial conditions for the test?
- WHEN - what is occurring that needs to be tested?
- THEN - what is the expected response?
For more, review the GivenWhenThen article by Martin Fowler and the Python Testing with pytest book by Brian Okken.
Next, we have the actual test:
user = User('patkennedy79@gmail.com', 'FlaskIsAwesome')
assert user.email == 'patkennedy79@gmail.com'
assert user.hashed_password != 'FlaskIsAwesome'
assert user.role == 'user'
After creating a new user with valid arguments to the constructor, the properties of the user are checked to make sure it was created properly.
The second test that we're going to write is a functional test for project/recipes/routes.py, which contains the view functions for the recipes blueprint.
Since this test is a functional test, it should be implemented in tests/functional/test_recipes.py:
from project import create_app
def test_home_page():
    """
    GIVEN a Flask application configured for testing
    WHEN the '/' page is requested (GET)
    THEN check that the response is valid
    """
    flask_app = create_app('flask_test.cfg')

    # Create a test client using the Flask application configured for testing
    with flask_app.test_client() as test_client:
        response = test_client.get('/')
        assert response.status_code == 200
        assert b"Welcome to the" in response.data
        assert b"Flask User Management Example!" in response.data
        assert b"Need an account?" in response.data
        assert b"Existing user?" in response.data
This project uses the Application Factory Pattern to create the Flask application. Therefore, the create_app() function needs to first be imported:
from project import create_app
The test function, test_home_page(), starts with the GIVEN-WHEN-THEN description of what the test does. Next, a Flask application (flask_app) is created:
flask_app = create_app('flask_test.cfg')
In order to create the proper environment for testing, Flask provides a test_client helper. This creates a test version of our Flask application, which we use to make a GET call to the '/' URL. We then check that the status code returned is OK (200) and that the response contains the following strings:
These checks match with what we expect the user to see when we navigate to the '/' URL:
An example of an off-nominal functional test would be to utilize an invalid HTTP method (POST) when accessing the '/' URL:
def test_home_page_post():
    """
    GIVEN a Flask application configured for testing
    WHEN the '/' page is posted to (POST)
    THEN check that a '405' status code is returned
    """
    flask_app = create_app('flask_test.cfg')

    # Create a test client using the Flask application configured for testing
    with flask_app.test_client() as test_client:
        response = test_client.post('/')
        assert response.status_code == 405
        assert b"Flask User Management Example!" not in response.data
This test checks that a POST request to the '/' URL results in an error code of 405 (Method Not Allowed) being returned.
Take a second to review the two functional tests... do you see some duplicate code between these two test functions? Do you see a lot of code for initializing the state needed by the test functions? We can use fixtures to address these issues.
Fixtures initialize tests to a known state in order to run tests in a predictable and repeatable manner.
The classic approach to writing and executing tests follows the xUnit type of test framework, where each test runs between a SetUp() call and a TearDown() call.
The SetUp() and TearDown() methods always run for each unit test within a test suite. This approach results in the same initial state for each test within a test suite, which doesn't provide much flexibility.
The test fixture approach provides much greater flexibility than the classic Setup/Teardown approach.
pytest-flask facilitates testing Flask apps by providing a set of common fixtures used for testing Flask apps. This library is not used in this tutorial, as I want to show how to create the fixtures that help support testing Flask apps.
First, fixtures are defined as functions (each of which should have a descriptive name indicating its purpose).
Second, multiple fixtures can be run to set the initial state for a test function. In fact, fixtures can even call other fixtures! So, you can compose them together to create the required state.
Finally, fixtures can be run with different scopes:
- function - run once per test function (default scope)
- class - run once per test class
- module - run once per module (e.g., a test file)
- session - run once per session

For example, if you have a fixture with module scope, that fixture will run once (and only once) before the test functions in the module run.
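The caching behavior behind scopes can be illustrated with a small sketch. This is not pytest's actual implementation, just the idea: a fixture's result is reused while tests stay within the same scope unit.

```python
# Not pytest's real implementation -- just a sketch of the idea behind
# fixture scopes: a fixture's result is cached per scope unit, so a
# module-scoped fixture is set up once for a whole test module, while a
# function-scoped fixture is set up again for every test function.
calls = {'module': 0, 'function': 0}
cache = {}

def get_fixture(name, scope, module_name, factory):
    # Module scope: the cache key is shared by all tests in the module.
    # Function scope: a unique key per request forces a fresh setup.
    key = (name, module_name) if scope == 'module' else (name, object())
    if key not in cache:
        calls[scope] += 1
        cache[key] = factory()
    return cache[key]

def make_client():
    return 'test client'

# Simulate three test functions in the same module requesting fixtures:
for _ in range(3):
    get_fixture('test_client', 'module', 'test_recipes.py', make_client)
    get_fixture('fresh_client', 'function', 'test_recipes.py', make_client)

assert calls['module'] == 1    # module-scoped: set up once
assert calls['function'] == 3  # function-scoped: set up per test
```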
Fixtures should be created in tests/conftest.py.
To help facilitate testing the User class in project/models.py, we can add a fixture to tests/conftest.py that is used to create a User object to test:
import pytest

from project.models import User


@pytest.fixture(scope='module')
def new_user():
    user = User('patkennedy79@gmail.com', 'FlaskIsAwesome')
    return user
The @pytest.fixture decorator specifies that this function is a fixture with module-level scope. In other words, this fixture will be called once per test module.
This fixture, new_user, creates an instance of User using valid arguments to the constructor. The user object is then passed to the test function (return user).
We can simplify the test_new_user() test function from earlier by using the new_user fixture in tests/unit/test_models.py:
def test_new_user_with_fixture(new_user):
    """
    GIVEN a User model
    WHEN a new User is created
    THEN check the email, hashed_password, authenticated, and role fields are defined correctly
    """
    assert new_user.email == 'patkennedy79@gmail.com'
    assert new_user.hashed_password != 'FlaskIsAwesome'
    assert new_user.role == 'user'
By using a fixture, the test function is reduced to the assert statements that perform the checks against the User object.
To help facilitate testing all the view functions in the Flask project, a fixture can be created in tests/conftest.py:
import pytest

from project import create_app


@pytest.fixture(scope='module')
def test_client():
    flask_app = create_app('flask_test.cfg')

    # Create a test client using the Flask application configured for testing
    with flask_app.test_client() as testing_client:
        # Establish an application context
        with flask_app.app_context():
            yield testing_client  # this is where the testing happens!
This fixture creates the test client using a context manager:
with flask_app.test_client() as testing_client:
Next, the Application context is pushed onto the stack for use by the test functions:
with flask_app.app_context():
    yield testing_client  # this is where the testing happens!
To learn more about the Application context in Flask, refer to the following blog posts:
- Basics: Understanding the Application and Request Contexts in Flask
- Advanced: Deep Dive into Flask's Application and Request Contexts
The yield testing_client statement means that execution is being passed to the test functions.
We can simplify the functional tests from earlier with the test_client fixture in tests/functional/test_recipes.py:
def test_home_page_with_fixture(test_client):
    """
    GIVEN a Flask application configured for testing
    WHEN the '/' page is requested (GET)
    THEN check that the response is valid
    """
    response = test_client.get('/')
    assert response.status_code == 200
    assert b"Welcome to the" in response.data
    assert b"Flask User Management Example!" in response.data
    assert b"Need an account?" in response.data
    assert b"Existing user?" in response.data


def test_home_page_post_with_fixture(test_client):
    """
    GIVEN a Flask application
    WHEN the '/' page is posted to (POST)
    THEN check that a '405' status code is returned
    """
    response = test_client.post('/')
    assert response.status_code == 405
    assert b"Flask User Management Example!" not in response.data
Did you notice that much of the duplicate code is gone? By utilizing the test_client fixture, each test function is simplified down to the HTTP call (GET or POST) and the assert that checks the response.
I really find that using fixtures helps to focus the test function on actually doing the testing, as the test initialization is handled in the fixture.
To run the tests, navigate to the top-level folder of the Flask project and run pytest through the Python interpreter:
(venv)$ python -m pytest
============================= test session starts ==============================
tests/functional/test_recipes.py .... [ 30%]
tests/functional/test_users.py ..... [ 69%]
tests/unit/test_models.py .... [100%]
============================== 13 passed in 0.46s ==============================
Why run pytest through the Python interpreter?
The main advantage is that the current directory (e.g., the top-level folder of the Flask project) is added to the system path. This avoids any problems with pytest not being able to find the source code.
pytest will recursively search through your project structure to find Python files matching test_*.py and then run the functions that start with test_ in those files. There is no configuration needed to identify where the test files are located!
To see more details on the tests that were run:
(venv)$ python -m pytest -v
============================= test session starts ==============================
tests/functional/test_recipes.py::test_home_page PASSED [ 7%]
tests/functional/test_recipes.py::test_home_page_post PASSED [ 15%]
tests/functional/test_recipes.py::test_home_page_with_fixture PASSED [ 23%]
tests/functional/test_recipes.py::test_home_page_post_with_fixture PASSED [ 30%]
tests/functional/test_users.py::test_login_page PASSED [ 38%]
tests/functional/test_users.py::test_valid_login_logout PASSED [ 46%]
tests/functional/test_users.py::test_invalid_login PASSED [ 53%]
tests/functional/test_users.py::test_valid_registration PASSED [ 61%]
tests/functional/test_users.py::test_invalid_registration PASSED [ 69%]
tests/unit/test_models.py::test_new_user PASSED [ 76%]
tests/unit/test_models.py::test_new_user_with_fixture PASSED [ 84%]
tests/unit/test_models.py::test_setting_password PASSED [ 92%]
tests/unit/test_models.py::test_user_id PASSED [100%]
============================== 13 passed in 0.62s ==============================
If you only want to run a specific type of test:
python -m pytest tests/unit/
python -m pytest tests/functional/
To really get a sense of when the test_client() fixture is run, pytest can provide a call structure of the fixtures and tests with the --setup-show argument:
(venv)$ python -m pytest --setup-show tests/functional/test_recipes.py
====================================== test session starts =====================================
tests/functional/test_recipes.py
...
SETUP M test_client
functional/test_recipes.py::test_home_page_with_fixture (fixtures used: test_client).
functional/test_recipes.py::test_home_page_post_with_fixture (fixtures used: test_client).
TEARDOWN M test_client
======================================= 4 passed in 0.18s ======================================
The test_client fixture has a 'module' scope, so it's executed prior to the two _with_fixture tests in tests/functional/test_recipes.py.
If you change the scope of the test_client fixture to a 'function' scope:

@pytest.fixture(scope='function')

Then the test_client fixture will run prior to each of the two _with_fixture tests:
(venv)$ python -m pytest --setup-show tests/functional/test_recipes.py
======================================= test session starts ======================================
tests/functional/test_recipes.py
...
SETUP F test_client
functional/test_recipes.py::test_home_page_with_fixture (fixtures used: test_client).
TEARDOWN F test_client
SETUP F test_client
functional/test_recipes.py::test_home_page_post_with_fixture (fixtures used: test_client).
TEARDOWN F test_client
======================================== 4 passed in 0.21s =======================================
Since we want the test_client fixture to only be run once in this module, revert the scope back to 'module'.
When developing tests, it's nice to get an understanding of how much of the source code is actually tested. This concept is known as code coverage.
I need to be very clear that having a set of tests that covers 100% of the source code is by no means an indicator that the code is properly tested.
This metric means that there are a lot of tests and a lot of effort has been put into developing the tests. The quality of the tests still needs to be checked by code inspection.
That said, the other extreme, where there is a minimal set of tests (or none at all!), is much worse!
There are two excellent packages available for determining code coverage: coverage.py and pytest-cov.
I recommend using pytest-cov, as it integrates seamlessly with pytest. It's built on top of Ned Batchelder's coverage.py, which is the standard for code coverage in Python.
Running pytest while checking code coverage requires the --cov argument to indicate which Python package (project in the Flask project structure) to check the coverage of:
(venv)$ python -m pytest --cov=project
============================= test session starts ==============================
tests/functional/test_recipes.py .... [ 30%]
tests/functional/test_users.py ..... [ 69%]
tests/unit/test_models.py .... [100%]
---------- coverage: platform darwin, python 3.8.5-final-0 -----------
Name Stmts Miss Cover
-------------------------------------------------
project/__init__.py 27 0 100%
project/models.py 32 2 94%
project/recipes/__init__.py 3 0 100%
project/recipes/routes.py 5 0 100%
project/users/__init__.py 3 0 100%
project/users/forms.py 18 1 94%
project/users/routes.py 50 4 92%
-------------------------------------------------
TOTAL 138 7 95%
============================== 13 passed in 0.86s ==============================
Arguments can still be passed to pytest even when checking code coverage:
(venv)$ python -m pytest --setup-show --cov=project
This article served as a guide to testing Flask applications, focusing on:
1660269060
Cet article sert de guide pour tester les applications Flask avec pytest.
Nous verrons d'abord pourquoi les tests sont importants pour créer un logiciel maintenable et sur quoi vous devez vous concentrer lors des tests. Ensuite, nous détaillerons comment :
À la fin de cet article, vous serez en mesure de :
En général, les tests permettent de s'assurer que votre application fonctionnera comme prévu pour vos utilisateurs finaux.
Les projets logiciels avec une couverture de test élevée ne sont jamais parfaits, mais c'est un bon indicateur initial de la qualité du logiciel. De plus, un code testable est généralement le signe d'une bonne architecture logicielle, c'est pourquoi les développeurs avancés prennent en compte les tests tout au long du cycle de développement.
Les tests peuvent être envisagés à trois niveaux :
Les tests unitaires testent la fonctionnalité d'une unité de code individuelle isolée de ses dépendances. Ils constituent la première ligne de défense contre les erreurs et les incohérences dans votre base de code. Ils testent de l'intérieur, du point de vue du programmeur.
Les tests fonctionnels testent plusieurs composants d'un produit logiciel pour s'assurer que les composants fonctionnent correctement ensemble. En règle générale, ces tests se concentrent sur les fonctionnalités que l'utilisateur utilisera. Ils testent de l'extérieur vers l'intérieur, du point de vue de l'utilisateur final.
Les tests unitaires et fonctionnels sont des éléments fondamentaux du processus de développement piloté par les tests (TDD) .
Les tests améliorent la maintenabilité de votre code.
La maintenabilité consiste à apporter des corrections de bogues ou des améliorations à votre code ou à un autre développeur ayant besoin de mettre à jour votre code à un moment donné dans le futur.
Les tests doivent être combinés à un processus d' intégration continue (CI) pour garantir que vos tests sont constamment exécutés, idéalement à chaque validation de votre référentiel. Une suite solide de tests peut être essentielle pour détecter les défauts rapidement et tôt dans le processus de développement avant que vos utilisateurs finaux ne les rencontrent en production.
Que devriez-vous tester ?
Encore une fois, les tests unitaires doivent se concentrer sur le test de petites unités de code de manière isolée.
Par exemple, dans une application Flask, vous pouvez utiliser des tests unitaires pour tester :
Les tests fonctionnels, quant à eux, doivent se concentrer sur le fonctionnement des fonctions de la vue.
Par exemple:
Concentrez-vous sur les scénarios de test avec lesquels l'utilisateur final interagira. L'expérience qu'ont les utilisateurs de votre produit est primordiale !
pytest est un framework de test pour Python utilisé pour écrire, organiser et exécuter des cas de test. Après avoir configuré votre structure de test de base, pytest facilite l'écriture de tests et offre une grande flexibilité pour exécuter les tests. pytest satisfait les aspects clés d'un bon environnement de test :
pytest est incroyable! Je recommande fortement de l'utiliser pour tester toute application ou script écrit en Python.
Si vous souhaitez vraiment apprendre tous les différents aspects de pytest, je vous recommande vivement le livre Python Testing with pytest de Brian Okken.
Python a un framework de test intégré appelé unittest , qui est également un excellent choix pour les tests. Le module unittest est inspiré du framework de test xUnit .
Il fournit les éléments suivants :
assert
instructions pour effectuer des vérificationsLes principales différences entre pytest et unittest :
Caractéristique | pytest | Test de l'unité |
---|---|---|
Installation | Bibliothèque tierce | Fait partie de la bibliothèque standard de base |
Tester la configuration et le démontage | agencements | setUp() et tearDown() méthodes |
Format d'assertion | Affirmation intégrée | assert* méthodes de style |
Structure | Fonctionnel | Orienté objet |
L'un ou l'autre cadre est bon pour tester un projet Flask. Cependant, je préfère pytest car il:
assert
, qui est beaucoup plus lisible et plus facile à mémoriser par rapport aux assertSomething
méthodes -- comme assertEquals
, assertTrue
et assertContains
-- dans unittest.J'aime organiser tous les cas de test dans un dossier "tests" séparé au même niveau que les fichiers d'application.
De plus, j'aime beaucoup différencier les tests unitaires des tests fonctionnels en les divisant en sous-dossiers séparés. Cette structure vous donne la possibilité d'exécuter facilement uniquement les tests unitaires (ou uniquement les tests fonctionnels, d'ailleurs).
Voici un exemple de la structure du répertoire "tests" :
└── tests
├── conftest.py
├── functional
│ ├── __init__.py
│ ├── test_stocks.py
│ └── test_users.py
└── unit
├── __init__.py
└── test_models.py
Et, voici comment le dossier "tests" s'intègre dans un projet Flask typique avec des blueprints :
├── app.py
├── project
│ ├── __init__.py
│ ├── models.py
│ └── ...blueprint folders...
├── requirements.txt
├── tests
│ ├── conftest.py
│ ├── functional
│ │ ├── __init__.py
│ │ ├── test_stocks.py
│ │ └── test_users.py
│ └── unit
│ ├── __init__.py
│ └── test_models.py
└── venv
Le premier test que nous allons écrire est un test unitaire pour project/models.py , qui contient l' interface SQLAlchemy vers la base de données.
Ce test n'accède pas à la base de données sous-jacente ; il vérifie uniquement la classe d'interface utilisée par SQLAlchemy.
Puisque ce test est un test unitaire, il doit être implémenté dans tests/unit/test_models.py :
from project.models import User
def test_new_user():
"""
GIVEN a User model
WHEN a new User is created
THEN check the email, hashed_password, and role fields are defined correctly
"""
user = User('patkennedy79@gmail.com', 'FlaskIsAwesome')
assert user.email == 'patkennedy79@gmail.com'
assert user.hashed_password != 'FlaskIsAwesome'
assert user.role == 'user'
Regardons de plus près ce test.
Après l'importation, nous commençons par une description de ce que fait le test :
"""
GIVEN a User model
WHEN a new User is created
THEN check the email, hashed_password, and role fields are defined correctly
"""
Pourquoi inclure autant de commentaires pour une fonction de test ?
J'ai découvert que les tests sont l'un des aspects les plus difficiles à maintenir d'un projet. Souvent, le code (y compris le niveau de commentaires) des suites de tests est loin du niveau de qualité du code testé.
Une structure commune utilisée pour décrire ce que fait chaque fonction de test contribue à la maintenabilité en permettant à quelqu'un (un autre développeur, votre futur moi) de comprendre rapidement le but de chaque test.
Une pratique courante consiste à utiliser la structure GIVEN-WHEN-THEN :
- DONNÉ - quelles sont les conditions initiales du test ?
- QUAND - que se passe-t-il qui doit être testé ?
- ALORS - quelle est la réponse attendue ?
Pour en savoir plus, consultez l' article GivenWhenThen de Martin Fowler et le livre Python Testing with pytest de Brian Okken.
Ensuite, nous avons le test réel:
user = User('patkennedy79@gmail.com', 'FlaskIsAwesome')
assert user.email == 'patkennedy79@gmail.com'
assert user.hashed_password != 'FlaskIsAwesome'
assert user.role == 'user'
Après avoir créé un nouveau user
avec des arguments valides pour le constructeur, les propriétés du user
sont vérifiées pour s'assurer qu'il a été créé correctement.
Le deuxième test que nous allons écrire est un test fonctionnel pour project/recipes/routes.py , qui contient les fonctions d'affichage du recipes
blueprint.
Puisque ce test est un test fonctionnel, il doit être implémenté dans tests/functional/test_recipes.py :
from project import create_app
def test_home_page():
"""
GIVEN a Flask application configured for testing
WHEN the '/' page is requested (GET)
THEN check that the response is valid
"""
flask_app = create_app('flask_test.cfg')
# Create a test client using the Flask application configured for testing
with flask_app.test_client() as test_client:
response = test_client.get('/')
assert response.status_code == 200
assert b"Welcome to the" in response.data
assert b"Flask User Management Example!" in response.data
assert b"Need an account?" in response.data
assert b"Existing user?" in response.data
Ce projet utilise le modèle Application Factory pour créer l'application Flask. Par conséquent, la create_app()
fonction doit d'abord être importée :
from project import create_app
La fonction de test, test_home_page()
, commence par la description DONNÉE QUAND-ALORS de ce que fait le test. Ensuite, une application Flask ( flask_app
) est créée :
flask_app = create_app('flask_test.cfg')
Afin de créer l'environnement approprié pour les tests, Flask fournit un assistant test_client . Cela crée une version de test de notre application Flask, que nous avons utilisée pour effectuer un appel GET à l'URL '/'. Nous vérifions ensuite que le code de statut renvoyé est OK (200) et que la réponse contenait les chaînes suivantes :
Ces vérifications correspondent à ce que nous attendons de l'utilisateur lorsqu'il accède à l'URL '/' :
Un exemple de test fonctionnel non nominal consisterait à utiliser une méthode HTTP invalide (POST) lors de l'accès à l'URL '/' :
def test_home_page_post():
"""
GIVEN a Flask application configured for testing
WHEN the '/' page is is posted to (POST)
THEN check that a '405' status code is returned
"""
flask_app = create_app('flask_test.cfg')
# Create a test client using the Flask application configured for testing
with flask_app.test_client() as test_client:
response = test_client.post('/')
assert response.status_code == 405
assert b"Flask User Management Example!" not in response.data
Ce test vérifie qu'une demande POST à l'URL '/' entraîne le renvoi d'un code d'erreur 405 (méthode non autorisée).
Prenez une seconde pour passer en revue les deux tests fonctionnels... voyez-vous du code en double entre ces deux fonctions de test ? Voyez-vous beaucoup de code pour initialiser l'état requis par les fonctions de test ? Nous pouvons utiliser des luminaires pour résoudre ces problèmes.
Les appareils initialisent les tests à un état connu afin d'exécuter les tests de manière prévisible et reproductible.
L'approche classique de l'écriture et de l'exécution des tests suit le type de framework de test xUnit , où chaque test s'exécute comme suit :
SetUp()
TearDown()
Les méthodes SetUp()
et TearDown()
s'exécutent toujours pour chaque test unitaire d'une suite de tests. Cette approche aboutit au même état initial pour chaque test d'une suite de tests, ce qui n'offre pas beaucoup de flexibilité.
L'approche de montage de test offre une flexibilité beaucoup plus grande que l'approche classique de configuration/démontage.
pytest-flask facilite le test des applications Flask en fournissant un ensemble d'appareils communs utilisés pour tester les applications Flask. Cette bibliothèque n'est pas utilisée dans ce didacticiel, car je souhaite montrer comment créer les appareils qui permettent de tester les applications Flask.
Tout d'abord, les appareils sont définis comme des fonctions (qui doivent avoir un nom descriptif pour leur objectif).
Deuxièmement, plusieurs appareils peuvent être exécutés pour définir l'état initial d'une fonction de test. En fait, les projecteurs peuvent même appeler d'autres projecteurs ! Ainsi, vous pouvez les composer ensemble pour créer l'état requis.
Enfin, les projecteurs peuvent être exécutés avec différentes portées :
function
- exécuter une fois par fonction de test (portée par défaut)class
- exécuter une fois par classe de testmodule
- exécuter une fois par module (par exemple, un fichier de test)session
- exécuter une fois par sessionPar exemple, si vous avez un appareil avec une portée de module, cet appareil s'exécutera une fois (et une seule fois) avant que les fonctions de test dans le module ne s'exécutent.
Les luminaires doivent être créés dans tests/conftest.py .
Pour aider à faciliter le test de la User
classe dans project/models.py , nous pouvons ajouter un appareil à tests/conftest.py qui est utilisé pour créer un User
objet à tester :
from project.models import User
@pytest.fixture(scope='module')
def new_user():
user = User('patkennedy79@gmail.com', 'FlaskIsAwesome')
return user
Le @pytest.fixture
décorateur spécifie que cette fonction est un appareil avec une module
portée de niveau. En d'autres termes, ce montage sera appelé un par module de test.
Cette fixture, new_user
, crée une instance d' User
utilisation d'arguments valides pour le constructeur. user
est ensuite passé à la fonction de test ( return user
).
Nous pouvons simplifier la test_new_user()
fonction de test précédente en utilisant le new_user
fixture dans tests/unit/test_models.py :
def test_new_user_with_fixture(new_user):
"""
GIVEN a User model
WHEN a new User is created
THEN check the email, hashed_password, authenticated, and role fields are defined correctly
"""
assert new_user.email == 'patkennedy79@gmail.com'
assert new_user.hashed_password != 'FlaskIsAwesome'
assert new_user.role == 'user'
En utilisant un appareil, la fonction de test est réduite aux assert
instructions qui effectuent les vérifications par rapport à l' User
objet.
Pour aider à faciliter le test de toutes les fonctions de vue dans le projet Flask, un appareil peut être créé dans tests/conftest.py :
from project import create_app
@pytest.fixture(scope='module')
def test_client():
flask_app = create_app('flask_test.cfg')
# Create a test client using the Flask application configured for testing
with flask_app.test_client() as testing_client:
# Establish an application context
with flask_app.app_context():
yield testing_client # this is where the testing happens!
Cet appareil crée le client de test à l'aide d'un gestionnaire de contexte :
with flask_app.test_client() as testing_client:
Ensuite, le contexte Application est poussé sur la pile pour être utilisé par les fonctions de test :
with flask_app.app_context():
yield testing_client # this is where the testing happens!
Pour en savoir plus sur le contexte d'application dans Flask, consultez les articles de blog suivants :
- Notions de base : comprendre les contextes d'application et de demande dans Flask
- Avancé : analyse approfondie des contextes d'application et de demande de Flask
L' yield testing_client
instruction signifie que l'exécution est transmise aux fonctions de test.
Nous pouvons simplifier les tests fonctionnels précédents avec le test_client
fixture dans tests/functional/test_recipes.py :
def test_home_page_with_fixture(test_client):
"""
GIVEN a Flask application configured for testing
WHEN the '/' page is requested (GET)
THEN check that the response is valid
"""
response = test_client.get('/')
assert response.status_code == 200
assert b"Welcome to the" in response.data
assert b"Flask User Management Example!" in response.data
assert b"Need an account?" in response.data
assert b"Existing user?" in response.data
def test_home_page_post_with_fixture(test_client):
"""
GIVEN a Flask application
WHEN the '/' page is posted to (POST)
THEN check that a '405' status code is returned
"""
response = test_client.post('/')
assert response.status_code == 405
assert b"Flask User Management Example!" not in response.data
Did you notice that much of the duplicate code is gone? By using the test_client fixture, each test function is simplified down to the HTTP call (GET or POST) and the assert that checks the response.
I really find that using fixtures helps focus the test function on actually doing the testing, as the test initialization is handled in the fixture.
To run the tests, navigate to the top-level folder of the Flask project and run pytest through the Python interpreter:
(venv)$ python -m pytest
============================= test session starts ==============================
tests/functional/test_recipes.py .... [ 30%]
tests/functional/test_users.py ..... [ 69%]
tests/unit/test_models.py .... [100%]
============================== 13 passed in 0.46s ==============================
Why run pytest through the Python interpreter?
The main advantage is that the current directory (i.e., the top-level folder of the Flask project) is added to the system path. This avoids any problems with pytest being unable to find the source code.
pytest will recursively search through your project structure to find Python files whose names start with test_ and then run the functions in those files whose names start with test_. No configuration is needed to identify where the test files are located!
To see more details on the tests that were run:
(venv)$ python -m pytest -v
============================= test session starts ==============================
tests/functional/test_recipes.py::test_home_page PASSED [ 7%]
tests/functional/test_recipes.py::test_home_page_post PASSED [ 15%]
tests/functional/test_recipes.py::test_home_page_with_fixture PASSED [ 23%]
tests/functional/test_recipes.py::test_home_page_post_with_fixture PASSED [ 30%]
tests/functional/test_users.py::test_login_page PASSED [ 38%]
tests/functional/test_users.py::test_valid_login_logout PASSED [ 46%]
tests/functional/test_users.py::test_invalid_login PASSED [ 53%]
tests/functional/test_users.py::test_valid_registration PASSED [ 61%]
tests/functional/test_users.py::test_invalid_registration PASSED [ 69%]
tests/unit/test_models.py::test_new_user PASSED [ 76%]
tests/unit/test_models.py::test_new_user_with_fixture PASSED [ 84%]
tests/unit/test_models.py::test_setting_password PASSED [ 92%]
tests/unit/test_models.py::test_user_id PASSED [100%]
============================== 13 passed in 0.62s ==============================
If you only want to run a specific type of test:
python -m pytest tests/unit/
python -m pytest tests/functional/
To really get an idea of when the test_client() fixture is run, pytest can show the call structure of the fixtures and tests with the --setup-show argument:
(venv)$ python -m pytest --setup-show tests/functional/test_recipes.py
====================================== test session starts =====================================
tests/functional/test_recipes.py
...
SETUP M test_client
functional/test_recipes.py::test_home_page_with_fixture (fixtures used: test_client).
functional/test_recipes.py::test_home_page_post_with_fixture (fixtures used: test_client).
TEARDOWN M test_client
======================================= 4 passed in 0.18s ======================================
The test_client fixture has 'module' scope, so it's executed before the two _with_fixture tests in tests/functional/test_recipes.py.
If you change the scope of the test_client fixture to 'function' scope:
@pytest.fixture(scope='function')
then the test_client fixture will run before each of the two _with_fixture tests:
(venv)$ python -m pytest --setup-show tests/functional/test_recipes.py
======================================= test session starts ======================================
tests/functional/test_recipes.py
...
SETUP F test_client
functional/test_recipes.py::test_home_page_with_fixture (fixtures used: test_client).
TEARDOWN F test_client
SETUP F test_client
functional/test_recipes.py::test_home_page_post_with_fixture (fixtures used: test_client).
TEARDOWN F test_client
======================================== 4 passed in 0.21s =======================================
Since we want the test_client fixture to only run once in this module, revert the scope back to 'module'.
When developing tests, it's good to understand how much of the source code is actually tested. This concept is known as code coverage.
I need to be very clear that having a set of tests that covers 100% of the source code is by no means an indicator that the code is properly tested.
This metric only means that there are a lot of tests and that a lot of effort has been put into developing them. The quality of the tests still needs to be checked by code inspection.
That said, the other extreme, a minimal set of tests (or none at all!), is much worse!
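As a tiny, hypothetical illustration of why coverage alone proves little: the test below executes every line of the buggy helper (so a coverage tool would report 100% for it), yet its weak assertion never catches the bug.

```python
# Hypothetical helper with a bug: the discount is added instead of subtracted
def apply_discount(price, percent):
    return price + price * (percent / 100)

def test_apply_discount():
    result = apply_discount(100, 10)
    # Every line of apply_discount ran, so coverage reports 100%,
    # yet this weak assertion never notices the wrong answer (110, not 90)
    assert result > 0

test_apply_discount()  # passes despite the bug
```

Only code inspection (or a stronger assertion, like checking for the exact expected value) would reveal the defect.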
There are two excellent packages available for determining code coverage: coverage.py and pytest-cov.
I recommend using pytest-cov based on its seamless integration with pytest. It's built on top of coverage.py, from Ned Batchelder, which is the standard for code coverage in Python.
Running pytest while checking code coverage requires the --cov argument to indicate which Python package (project, in the Flask project structure) to check the coverage of:
(venv)$ python -m pytest --cov=project
============================= test session starts ==============================
tests/functional/test_recipes.py .... [ 30%]
tests/functional/test_users.py ..... [ 69%]
tests/unit/test_models.py .... [100%]
---------- coverage: platform darwin, python 3.8.5-final-0 -----------
Name Stmts Miss Cover
-------------------------------------------------
project/__init__.py 27 0 100%
project/models.py 32 2 94%
project/recipes/__init__.py 3 0 100%
project/recipes/routes.py 5 0 100%
project/users/__init__.py 3 0 100%
project/users/forms.py 18 1 94%
project/users/routes.py 50 4 92%
-------------------------------------------------
TOTAL 138 7 95%
============================== 13 passed in 0.86s ==============================
Even when checking code coverage, arguments can still be passed to pytest:
(venv)$ python -m pytest --setup-show --cov=project
This article served as a guide to testing Flask applications, focusing on:
Source: https://testdriven.io
1660261320
This article serves as a guide to testing Flask applications with pytest.
We'll first look at why testing is important for creating maintainable software and what you should focus on when testing. Then, we'll detail how to:
By the end of this article, you'll be able to:
Overall, testing helps ensure that your app will work as expected for your end users.
Software projects with a high test coverage are never perfect, but it's a good initial indicator of the quality of the software. Additionally, testable code is generally a sign of a good software architecture, which is why advanced developers take testing into account throughout the entire development lifecycle.
Testing can be considered at three levels:
Unit tests check the functionality of an individual unit of code, isolated from its dependencies. They are the first line of defense against errors and inconsistencies in your codebase. They test from the inside out, from the programmer's point of view.
Functional tests check multiple components of a software product to make sure the components are working together properly. Typically, these tests focus on functionality that the user is going to utilize. They test from the outside in, from the end user's point of view.
Both unit and functional testing are fundamental parts of the Test-Driven Development (TDD) process.
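To make the two levels concrete, here's a minimal sketch (the is_valid_email helper and /status route are hypothetical, not part of the project used later in this article): the unit test exercises one function in isolation, while the functional test drives the app from the outside through Flask's test client.

```python
from flask import Flask

app = Flask(__name__)

@app.route('/status')
def status():
    return 'OK'

def is_valid_email(address):
    # Tiny helper, used only to show a unit test in isolation
    return '@' in address and '.' in address.split('@')[-1]

# Unit test: inside out, no app or HTTP involved
def test_is_valid_email():
    assert is_valid_email('patkennedy79@gmail.com')
    assert not is_valid_email('not-an-email')

# Functional test: outside in, through the test client
def test_status_page():
    with app.test_client() as client:
        response = client.get('/status')
        assert response.status_code == 200
        assert b'OK' in response.data

test_is_valid_email()
test_status_page()
```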
Testing improves the maintainability of your code.
Maintainability refers to making bug fixes or enhancements to your code, or to another developer needing to update your code at some point in the future.
Testing should be combined with a Continuous Integration (CI) process to ensure that your tests are constantly being executed, ideally on each commit to your repository. A solid suite of tests can be critical to catching defects quickly and early in the development process, before your end users come across them in production.
What should you test?
Again, unit tests should focus on testing small units of code in isolation.
For example, in a Flask app, you may use unit tests to test:
Functional tests, meanwhile, should focus on how the view functions operate.
For example:
Focus on test scenarios that the end user will interact with. The experience that the users of your product have is paramount!
pytest is a test framework for Python used to write, organize, and run test cases. After setting up your basic test structure, pytest makes it really easy to write tests and provides a lot of flexibility for running them. pytest satisfies the key aspects of a good test environment:
pytest is awesome! I recommend using it for testing any application or script written in Python.
If you're interested in really learning all the different aspects of pytest, I highly recommend the Python Testing with pytest book by Brian Okken.
Python has a built-in test framework called unittest, which is also a great choice for testing. The unittest module is inspired by the xUnit test framework.
It provides the following:
- assert statements for performing checks
The main differences between pytest and unittest:
| Feature | pytest | unittest |
|---|---|---|
| Installation | Third-party library | Part of the core standard library |
| Test setup and teardown | fixtures | setUp() and tearDown() methods |
| Assertion format | Built-in assert | assert*-style methods |
| Structure | Functional | Object-oriented |
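To make the "Assertion format" and "Structure" rows concrete, here is the same trivial check written both ways (a self-contained sketch, not code from the project in this article):

```python
import unittest

# unittest style: checks grouped in a TestCase class, using assert* methods
class TestMath(unittest.TestCase):
    def test_add(self):
        self.assertEqual(2 + 3, 5)
        self.assertNotEqual(2 + 3, 6)
        self.assertTrue(isinstance(2 + 3, int))

# pytest style: a plain function with bare assert statements
def test_add():
    assert 2 + 3 == 5
    assert 2 + 3 != 6
    assert isinstance(2 + 3, int)

# Run both directly to show they verify the same thing
suite = unittest.TestLoader().loadTestsFromTestCase(TestMath)
result = unittest.TextTestRunner(verbosity=0).run(suite)
test_add()
```

In practice you would let the pytest runner discover and execute both styles; pytest can collect unittest-style TestCase classes as well.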
Either framework is fine for testing a Flask project. However, I prefer pytest, since it supports the simple assert statement, which is far more readable and easier to remember than the assertSomething methods -- like assertEquals, assertTrue, and assertContains -- in unittest.
I like to organize all the test cases in a separate "tests" folder at the same level as the application files.
Additionally, I really like differentiating between unit and functional tests by splitting them into separate subfolders. This structure gives you the flexibility to easily run just the unit tests (or just the functional tests, for that matter).
Here's an example of the "tests" directory structure:
└── tests
├── conftest.py
├── functional
│ ├── __init__.py
│ ├── test_stocks.py
│ └── test_users.py
└── unit
├── __init__.py
└── test_models.py
And here's how the "tests" folder fits into a typical Flask project with blueprints:
├── app.py
├── project
│ ├── __init__.py
│ ├── models.py
│ └── ...blueprint folders...
├── requirements.txt
├── tests
│ ├── conftest.py
│ ├── functional
│ │ ├── __init__.py
│ │ ├── test_stocks.py
│ │ └── test_users.py
│ └── unit
│ ├── __init__.py
│ └── test_models.py
└── venv
The first test that we're going to write is a unit test for project/models.py, which contains the SQLAlchemy interface to the database.
This test doesn't access the underlying database; it only checks the interface class used by SQLAlchemy.
Since this test is a unit test, it should be implemented in tests/unit/test_models.py:
from project.models import User
def test_new_user():
"""
GIVEN a User model
WHEN a new User is created
THEN check the email, hashed_password, and role fields are defined correctly
"""
user = User('patkennedy79@gmail.com', 'FlaskIsAwesome')
assert user.email == 'patkennedy79@gmail.com'
assert user.hashed_password != 'FlaskIsAwesome'
assert user.role == 'user'
Let's take a closer look at this test.
After the import, we start with a description of what the test does:
"""
GIVEN a User model
WHEN a new User is created
THEN check the email, hashed_password, and role fields are defined correctly
"""
Why include so many comments for a test function?
I've found that tests are one of the most difficult aspects of a project to maintain. Often, the code (including the level of comments) for test suites is nowhere near the level of quality of the code being tested.
A common structure used to describe what each test function does helps with maintainability, by making it easier for someone (another developer, your future self) to quickly understand the purpose of each test.
A common practice is to use the GIVEN-WHEN-THEN structure:
- GIVEN - what are the initial conditions for the test?
- WHEN - what is occurring that needs to be tested?
- THEN - what is the expected response?
For more, check out the GivenWhenThen article by Martin Fowler and the Python Testing with pytest book by Brian Okken.
Next, we have the actual test:
user = User('patkennedy79@gmail.com', 'FlaskIsAwesome')
assert user.email == 'patkennedy79@gmail.com'
assert user.hashed_password != 'FlaskIsAwesome'
assert user.role == 'user'
After creating a new user with valid arguments to the constructor, the properties of user are checked to ensure it was created properly.
The second test that we're going to write is a functional test for project/recipes/routes.py, which contains the view functions for the recipes blueprint.
Since this test is a functional test, it should be implemented in tests/functional/test_recipes.py:
from project import create_app
def test_home_page():
"""
GIVEN a Flask application configured for testing
WHEN the '/' page is requested (GET)
THEN check that the response is valid
"""
flask_app = create_app('flask_test.cfg')
# Create a test client using the Flask application configured for testing
with flask_app.test_client() as test_client:
response = test_client.get('/')
assert response.status_code == 200
assert b"Welcome to the" in response.data
assert b"Flask User Management Example!" in response.data
assert b"Need an account?" in response.data
assert b"Existing user?" in response.data
This project uses the Application Factory Pattern for creating the Flask application. Therefore, the create_app() function needs to first be imported:
from project import create_app
The test function, test_home_page(), starts with the GIVEN-WHEN-THEN description of what the test does. Next, a Flask application (flask_app) is created:
flask_app = create_app('flask_test.cfg')
In order to create the proper environment for testing, Flask provides a test_client helper. This creates a test version of our Flask application, which we use to make a GET call to the '/' URL. We then check that the status code returned is correct (200) and that the response contains the following strings:
These checks match what we expect the user to see when they navigate to the '/' URL:
An example of a non-nominal functional test would be using an invalid HTTP method (POST) when accessing the '/' URL:
def test_home_page_post():
"""
GIVEN a Flask application configured for testing
WHEN the '/' page is posted to (POST)
THEN check that a '405' status code is returned
"""
flask_app = create_app('flask_test.cfg')
# Create a test client using the Flask application configured for testing
with flask_app.test_client() as test_client:
response = test_client.post('/')
assert response.status_code == 405
assert b"Flask User Management Example!" not in response.data
This test checks that a POST request to the '/' URL results in an error code of 405 (Method Not Allowed) being returned.
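The 405 response comes from Flask itself: a route registered without a methods argument only accepts GET (plus HEAD and OPTIONS), so any POST to it is rejected automatically. A minimal sketch, separate from the project in this article:

```python
from flask import Flask

app = Flask(__name__)

@app.route('/')  # no methods argument, so only GET/HEAD/OPTIONS are allowed
def index():
    return 'Welcome'

with app.test_client() as client:
    assert client.get('/').status_code == 200
    assert client.post('/').status_code == 405  # Method Not Allowed
```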
Take a second to review the two functional tests... do you see any duplicate code between these two test functions? Do you see a lot of code for initializing the state needed by the test functions? We can use fixtures to address these issues.
Fixtures initialize tests to a known state in order to run tests in a predictable and repeatable manner.
The classical approach to writing and running tests follows the xUnit test framework pattern, where each test runs as follows:
1. SetUp()
2. ...run the test case...
3. TearDown()
The SetUp() and TearDown() methods are always executed for every unit test within a test suite. This approach results in the same initial state for each test within a test suite, which doesn't provide much flexibility.
The test fixture approach provides much greater flexibility than the classic setup/teardown approach.
pytest-flask facilitates testing Flask apps by providing a set of common fixtures for testing Flask applications. This library is not used in this tutorial, as I want to show how to create the fixtures that help facilitate testing Flask apps.
First, fixtures are defined as functions (which should have a descriptive name for their purpose).
Second, multiple fixtures can be run to set the initial state for a test function. In fact, fixtures can even call other fixtures! So, you can compose them together to create the required state.
Finally, fixtures can be run with different scopes:
- function - run once per test function (default scope)
- class - run once per test class
- module - run once per module (e.g., a test file)
- session - run once per session
For example, if you have a fixture with module scope, that fixture will run once (and only once) before the test functions in the module run.
Fixtures should be created in tests/conftest.py.
To help facilitate testing the User class in project/models.py, we can add a fixture to tests/conftest.py that creates a User object to test against:
import pytest
from project.models import User
@pytest.fixture(scope='module')
def new_user():
user = User('patkennedy79@gmail.com', 'FlaskIsAwesome')
return user
The @pytest.fixture decorator specifies that this function is a fixture with module-level scope. In other words, this fixture will be called once per test module.
This fixture, new_user, creates an instance of User using valid arguments to the constructor. user is then passed to the test function (return user).
We can simplify the test_new_user() test function from earlier by using the new_user fixture in tests/unit/test_models.py:
def test_new_user_with_fixture(new_user):
"""
GIVEN a User model
WHEN a new User is created
THEN check the email, hashed_password, authenticated, and role fields are defined correctly
"""
assert new_user.email == 'patkennedy79@gmail.com'
assert new_user.hashed_password != 'FlaskIsAwesome'
assert new_user.role == 'user'
Source: https://testdriven.io
1660254060
Este artigo serve como um guia para testar aplicativos Flask com pytest.
Veremos primeiro por que o teste é importante para a criação de software sustentável e no que você deve se concentrar ao testar. Em seguida, detalharemos como:
Ao final deste artigo, você será capaz de:
Em geral, o teste ajuda a garantir que seu aplicativo funcione conforme o esperado para seus usuários finais.
Projetos de software com alta cobertura de teste nunca são perfeitos, mas é um bom indicador inicial da qualidade do software. Além disso, o código testável geralmente é um sinal de uma boa arquitetura de software, e é por isso que os desenvolvedores avançados levam os testes em consideração durante todo o ciclo de vida do desenvolvimento.
Os testes podem ser considerados em três níveis:
Os testes de unidade testam a funcionalidade de uma unidade individual de código isolada de suas dependências. Eles são a primeira linha de defesa contra erros e inconsistências em sua base de código. Eles testam de dentro para fora, do ponto de vista do programador.
Os testes funcionais testam vários componentes de um produto de software para garantir que os componentes estejam funcionando corretamente em conjunto. Normalmente, esses testes se concentram na funcionalidade que o usuário utilizará. Eles testam de fora para dentro, do ponto de vista do usuário final.
Tanto os testes unitários quanto os funcionais são partes fundamentais do processo de Desenvolvimento Orientado a Testes (TDD) .
O teste melhora a capacidade de manutenção do seu código.
Manutenibilidade refere-se a fazer correções de bugs ou aprimoramentos em seu código ou a outro desenvolvedor que precise atualizar seu código em algum momento no futuro.
Os testes devem ser combinados com um processo de Integração Contínua (CI) para garantir que seus testes sejam executados constantemente, de preferência em cada confirmação para seu repositório. Um conjunto sólido de testes pode ser fundamental para detectar defeitos rapidamente e no início do processo de desenvolvimento, antes que seus usuários finais os encontrem na produção.
O que você deve testar?
Novamente, os testes de unidade devem se concentrar em testar pequenas unidades de código isoladamente.
Por exemplo, em um aplicativo Flask, você pode usar testes de unidade para testar:
Os testes funcionais, por sua vez, devem se concentrar em como as funções de visualização operam.
Por exemplo:
Concentre-se nos cenários de teste com os quais o usuário final irá interagir. A experiência que os usuários do seu produto têm é primordial!
pytest é um framework de teste para Python usado para escrever, organizar e executar casos de teste. Depois de configurar sua estrutura básica de teste, o pytest facilita muito a escrita de testes e oferece muita flexibilidade para executar os testes. pytest satisfaz os principais aspectos de um bom ambiente de teste:
pytest é incrível! Eu recomendo usá-lo para testar qualquer aplicativo ou script escrito em Python.
Se você estiver interessado em realmente aprender todos os diferentes aspectos do pytest, eu recomendo o livro Python Testing with pytest de Brian Okken.
O Python possui uma estrutura de teste integrada chamada unittest , que também é uma ótima opção para testes. O módulo unittest é inspirado na estrutura de teste xUnit .
Ele fornece o seguinte:
assert
instruções para realizar verificaçõesAs principais diferenças entre pytest e unittest:
| Feature | pytest | unittest |
|---|---|---|
| Installation | Third-party library | Part of the core standard library |
| Test setup and teardown | fixtures | `setUp()` and `tearDown()` methods |
| Assertion format | Built-in `assert` statement | `assert*`-style methods |
| Structure | Functional | Object-oriented |
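To make the difference in assertion style concrete, here is the same check written both ways. This is a minimal standalone sketch; the `multiply` function is a hypothetical function under test, not code from the project:

```python
import unittest


def multiply(a, b):
    # Hypothetical function under test, used only for illustration
    return a * b


# unittest style: a TestCase subclass using an assert* method
class TestMultiply(unittest.TestCase):
    def test_multiply(self):
        self.assertEqual(multiply(3, 4), 12)


# pytest style: a plain function using a bare assert statement
def test_multiply():
    assert multiply(3, 4) == 12
```

The pytest version needs no class and no framework-specific method names, which is a big part of why its tests tend to read more naturally.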
Either framework is good for testing a Flask project. However, I prefer pytest since it:

- Supports the simple `assert` statement, which is much more readable and easier to remember compared to the `assertSomething` methods -- such as `assertEquals`, `assertTrue`, and `assertContains` -- in unittest.

I like to organize all the test cases in a separate "tests" folder at the same level as the application files.
Additionally, I really like distinguishing between unit and functional tests by splitting them into separate subfolders. This structure gives you the flexibility to easily run just the unit tests (or just the functional tests).
Here's an example of the structure of the "tests" directory:
└── tests
├── conftest.py
├── functional
│ ├── __init__.py
│ ├── test_stocks.py
│ └── test_users.py
└── unit
├── __init__.py
└── test_models.py
And here's how the "tests" folder fits into a typical Flask project with blueprints:
├── app.py
├── project
│ ├── __init__.py
│ ├── models.py
│ └── ...blueprint folders...
├── requirements.txt
├── tests
│ ├── conftest.py
│ ├── functional
│ │ ├── __init__.py
│ │ ├── test_stocks.py
│ │ └── test_users.py
│ └── unit
│ ├── __init__.py
│ └── test_models.py
└── venv
The first test we're going to write is a unit test for project/models.py, which contains the SQLAlchemy interface to the database.
This test doesn't access the underlying database; it only checks the interface class used by SQLAlchemy.
Since this test is a unit test, it should be implemented in tests/unit/test_models.py:
from project.models import User
def test_new_user():
"""
GIVEN a User model
WHEN a new User is created
THEN check the email, hashed_password, and role fields are defined correctly
"""
user = User('patkennedy79@gmail.com', 'FlaskIsAwesome')
assert user.email == 'patkennedy79@gmail.com'
assert user.hashed_password != 'FlaskIsAwesome'
assert user.role == 'user'
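Since the article doesn't show the model itself, here is a minimal sketch of a `User` class that would satisfy this test. It is an illustration only: the actual project uses a SQLAlchemy model, and a real Flask app would more likely hash passwords with `werkzeug.security.generate_password_hash` than with `hashlib` as shown here:

```python
import hashlib


class User:
    """Minimal sketch of a User model that satisfies the unit test above.

    Hypothetical stand-in for the real SQLAlchemy model in project/models.py;
    the hashing scheme here is for illustration only.
    """

    def __init__(self, email, password_plaintext):
        self.email = email
        # Never store the plaintext password; store a hash of it instead
        self.hashed_password = hashlib.sha256(
            password_plaintext.encode('utf-8')
        ).hexdigest()
        self.role = 'user'  # new users default to the 'user' role
```

Note that the test asserts `hashed_password != 'FlaskIsAwesome'` precisely because the constructor is expected to store a hash, not the plaintext password.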
Let's take a closer look at this test.
After the import, we start off with a description of what the test does:
"""
GIVEN a User model
WHEN a new User is created
THEN check the email, hashed_password, and role fields are defined correctly
"""
Why include so many comments for a test function?
I've found that tests are one of the most difficult aspects of a project to maintain. Often the code (including the level of comments) for test suites is nowhere near the level of quality of the code being tested.
A common structure used to describe what each test function does helps maintainability by making it easier for someone (another developer, your future self) to quickly understand the purpose of each test.
A common practice is to use the GIVEN-WHEN-THEN structure:
- GIVEN - what are the initial conditions for the test?
- WHEN - what is occurring that needs to be tested?
- THEN - what is the expected response?
To learn more, review Martin Fowler's GivenWhenThen article and the book Python Testing with pytest by Brian Okken.
Next, we have the actual test:
user = User('patkennedy79@gmail.com', 'FlaskIsAwesome')
assert user.email == 'patkennedy79@gmail.com'
assert user.hashed_password != 'FlaskIsAwesome'
assert user.role == 'user'
After creating a new `user` with valid arguments to the constructor, the properties of the `user` are checked to make sure it was created properly.
The second test we're going to write is a functional test for project/recipes/routes.py, which contains the view functions for the `recipes` blueprint.
Since this test is a functional test, it should be implemented in tests/functional/test_recipes.py:
from project import create_app
def test_home_page():
"""
GIVEN a Flask application configured for testing
WHEN the '/' page is requested (GET)
THEN check that the response is valid
"""
flask_app = create_app('flask_test.cfg')
# Create a test client using the Flask application configured for testing
with flask_app.test_client() as test_client:
response = test_client.get('/')
assert response.status_code == 200
assert b"Welcome to the" in response.data
assert b"Flask User Management Example!" in response.data
assert b"Need an account?" in response.data
assert b"Existing user?" in response.data
This project uses the Application Factory Pattern to create the Flask application. Therefore, the `create_app()` function first needs to be imported:
from project import create_app
The test function, `test_home_page()`, starts with the GIVEN-WHEN-THEN description of what the test does. Next, a Flask application (`flask_app`) is created:
flask_app = create_app('flask_test.cfg')
In order to create the proper environment for testing, Flask provides a test_client helper. This creates a test version of our Flask application, which we use to make a GET call to the '/' URL. We then check that the status code returned is OK (200) and that the response contains the following strings:
These checks match what we expect the user to see when we navigate to the '/' URL:
An example of an off-nominal functional test would be to use an invalid HTTP method (POST) when accessing the '/' URL:
def test_home_page_post():
"""
GIVEN a Flask application configured for testing
WHEN the '/' page is posted to (POST)
THEN check that a '405' status code is returned
"""
flask_app = create_app('flask_test.cfg')
# Create a test client using the Flask application configured for testing
with flask_app.test_client() as test_client:
response = test_client.post('/')
assert response.status_code == 405
assert b"Flask User Management Example!" not in response.data
This test checks that a POST request to the '/' URL results in an error code of 405 (Method Not Allowed) being returned.
Take a second to review the two functional tests... do you see any duplicate code between these two test functions? Do you see a lot of code for initializing the state needed by the test functions? We can use fixtures to address these problems.
Fixtures initialize tests to a known state in order to run tests in a predictable and repeatable manner.
The classic approach to writing and running tests follows the xUnit type of test framework, where each test runs as follows:
- `SetUp()`
- `TearDown()`

The `SetUp()` and `TearDown()` methods always run for each unit test within a test suite. This approach results in the same initial state for every test in the suite, which doesn't provide much flexibility.
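For contrast, here is what the classic setUp/tearDown approach looks like with Python's built-in unittest framework. This is a standalone sketch, not code from the project; it tests a scratch temp file simply to have some state to set up and tear down:

```python
import tempfile
import unittest


class TestScratchFile(unittest.TestCase):
    def setUp(self):
        # Runs before EVERY test method: create a fresh temporary file
        self.scratch = tempfile.TemporaryFile(mode='w+')

    def tearDown(self):
        # Runs after EVERY test method, even when the test fails
        self.scratch.close()

    def test_write_and_read_back(self):
        self.scratch.write('hello')
        self.scratch.seek(0)
        self.assertEqual(self.scratch.read(), 'hello')

    def test_starts_empty(self):
        # setUp gave this test its own fresh file, untouched by the other test
        self.assertEqual(self.scratch.read(), '')
```

Every test method gets exactly the same initialization; there is no built-in way to give different tests different setup, which is the inflexibility fixtures address.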
The test fixture approach provides much greater flexibility than the classic setup/teardown approach.
pytest-flask makes testing Flask applications easier by providing a set of common fixtures used for testing Flask apps. This library is not used in this tutorial, as I want to show how to create the fixtures that help support testing Flask apps.
First, fixtures are defined as functions (which should have descriptive names for their purpose).
Second, multiple fixtures can be run to set the initial state for a test function. In fact, fixtures can even call other fixtures! So you can compose them together to create the required state.
Finally, fixtures can be run with different scopes:
- `function` - run once per test function (default scope)
- `class` - run once per test class
- `module` - run once per module (e.g., a test file)
- `session` - run once per session

For example, if you have a module-scoped fixture, that fixture will run once (and only once) before the test functions in the module run.
Fixtures should be created in tests/conftest.py.
To help facilitate testing the `User` class in project/models.py, we can add a fixture to tests/conftest.py that is used to create a `User` object for testing:
import pytest

from project.models import User


@pytest.fixture(scope='module')
def new_user():
    user = User('patkennedy79@gmail.com', 'FlaskIsAwesome')
    return user
The `@pytest.fixture` decorator specifies that this function is a fixture with module-level scope. In other words, this fixture will be called once per test module.
This fixture, `new_user`, creates an instance of `User` using valid arguments to the constructor. `user` is then passed to the test function (`return user`).
We can simplify the `test_new_user()` test function from earlier by using the `new_user` fixture in tests/unit/test_models.py:
def test_new_user_with_fixture(new_user):
"""
GIVEN a User model
WHEN a new User is created
THEN check the email, hashed_password, authenticated, and role fields are defined correctly
"""
assert new_user.email == 'patkennedy79@gmail.com'
assert new_user.hashed_password != 'FlaskIsAwesome'
assert new_user.role == 'user'
By using a fixture, the test function is reduced to the `assert` statements that perform the checks against the `User` object.
To help facilitate testing all the view functions in the Flask project, a fixture can be created in tests/conftest.py:
import pytest

from project import create_app


@pytest.fixture(scope='module')
def test_client():
flask_app = create_app('flask_test.cfg')
# Create a test client using the Flask application configured for testing
with flask_app.test_client() as testing_client:
# Establish an application context
with flask_app.app_context():
yield testing_client # this is where the testing happens!
This fixture creates the test client using a context manager:
with flask_app.test_client() as testing_client:
Next, the application context is pushed onto the stack for use by the test functions:
with flask_app.app_context():
yield testing_client # this is where the testing happens!
To learn more about the application context in Flask, check out the following blog posts:
The `yield testing_client` statement means that execution is being passed to the test functions.
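Under the hood, a yield fixture is just a generator: pytest advances it to the `yield` to get the fixture value, runs the tests, then resumes it so the code after `yield` acts as teardown. This mechanism can be demonstrated with plain Python, no pytest required (the fixture below is a hypothetical stand-in, not the real `test_client`):

```python
# Record the order of events to show setup -> test -> teardown
events = []


def fake_client_fixture():
    events.append('setup')       # code before yield == setup
    yield 'fake test client'     # the value handed to the test functions
    events.append('teardown')    # code after yield == teardown


gen = fake_client_fixture()
client = next(gen)               # pytest does this before the tests run
events.append(f'test uses {client}')
try:
    next(gen)                    # pytest does this after the tests finish
except StopIteration:
    pass                         # generator is exhausted; teardown has run

# events is now ['setup', 'test uses fake test client', 'teardown']
```

This is why cleanup code placed after the `yield` in a fixture is guaranteed to run once the tests using the fixture are done.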
We can simplify the earlier functional tests with the `test_client` fixture in tests/functional/test_recipes.py:
def test_home_page_with_fixture(test_client):
"""
GIVEN a Flask application configured for testing
WHEN the '/' page is requested (GET)
THEN check that the response is valid
"""
response = test_client.get('/')
assert response.status_code == 200
assert b"Welcome to the" in response.data
assert b"Flask User Management Example!" in response.data
assert b"Need an account?" in response.data
assert b"Existing user?" in response.data
def test_home_page_post_with_fixture(test_client):
"""
GIVEN a Flask application
WHEN the '/' page is posted to (POST)
THEN check that a '405' status code is returned
"""
response = test_client.post('/')
assert response.status_code == 405
assert b"Flask User Management Example!" not in response.data
Did you notice that much of the duplicate code is gone? By utilizing the `test_client` fixture, each test function is simplified down to the HTTP call (GET or POST) and the assertions that check the response.
I really find that using fixtures helps focus the test function on actually doing the testing, as the test initialization is handled in the fixture.
To run the tests, navigate to the top-level folder of the Flask project and run pytest through the Python interpreter:
(venv)$ python -m pytest
============================= test session starts ==============================
tests/functional/test_recipes.py .... [ 30%]
tests/functional/test_users.py ..... [ 69%]
tests/unit/test_models.py .... [100%]
============================== 13 passed in 0.46s ==============================
Why run pytest through the Python interpreter?
The main advantage is that the current directory (e.g., the top-level folder of the Flask project) is added to the system path. This avoids problems with pytest not being able to find the source code.
pytest will recursively search through your project structure to find Python files that match test_*.py and then run the functions that start with `test_` in those files. No configuration is needed to identify where the test files are located!
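These discovery conventions can be demonstrated by generating a throwaway test file and letting pytest find it without any configuration. This sketch assumes pytest is installed; the file and function names are made up for the demonstration:

```python
import pathlib
import tempfile
import textwrap

import pytest

with tempfile.TemporaryDirectory() as tmp_dir:
    # A file matching test_*.py is picked up automatically
    test_file = pathlib.Path(tmp_dir) / 'test_discovery.py'
    test_file.write_text(textwrap.dedent('''
        def test_addition():      # collected: the name starts with test_
            assert 1 + 1 == 2

        def helper():             # NOT collected: no test_ prefix
            raise RuntimeError('pytest never runs this')
    '''))
    # Run pytest programmatically against the temp directory
    exit_code = pytest.main(['-q', tmp_dir])

# Exit code 0 means the one collected test was found and passed;
# helper() was ignored because its name doesn't start with test_
```

The same rules explain why the project's tests/unit/test_models.py and tests/functional/test_recipes.py are picked up automatically.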
To see more details on the tests that were run:
(venv)$ python -m pytest -v
============================= test session starts ==============================
tests/functional/test_recipes.py::test_home_page PASSED [ 7%]
tests/functional/test_recipes.py::test_home_page_post PASSED [ 15%]
tests/functional/test_recipes.py::test_home_page_with_fixture PASSED [ 23%]
tests/functional/test_recipes.py::test_home_page_post_with_fixture PASSED [ 30%]
tests/functional/test_users.py::test_login_page PASSED [ 38%]
tests/functional/test_users.py::test_valid_login_logout PASSED [ 46%]
tests/functional/test_users.py::test_invalid_login PASSED [ 53%]
tests/functional/test_users.py::test_valid_registration PASSED [ 61%]
tests/functional/test_users.py::test_invalid_registration PASSED [ 69%]
tests/unit/test_models.py::test_new_user PASSED [ 76%]
tests/unit/test_models.py::test_new_user_with_fixture PASSED [ 84%]
tests/unit/test_models.py::test_setting_password PASSED [ 92%]
tests/unit/test_models.py::test_user_id PASSED [100%]
============================== 13 passed in 0.62s ==============================
If you want to run only a specific type of test:
python -m pytest tests/unit/
python -m pytest tests/functional/
To really get a sense of when the `test_client()` fixture is run, pytest can show the call structure of the fixtures and tests with the `--setup-show` argument:
(venv)$ python -m pytest --setup-show tests/functional/test_recipes.py
====================================== test session starts =====================================
tests/functional/test_recipes.py
...
SETUP M test_client
functional/test_recipes.py::test_home_page_with_fixture (fixtures used: test_client).
functional/test_recipes.py::test_home_page_post_with_fixture (fixtures used: test_client).
TEARDOWN M test_client
======================================= 4 passed in 0.18s ======================================
The `test_client` fixture has 'module' scope, so it runs once before the two _with_fixture tests in tests/functional/test_recipes.py.
If you change the scope of the `test_client` fixture to 'function' scope:
@pytest.fixture(scope='function')
Then the `test_client` fixture will run before each of the two _with_fixture tests:
(venv)$ python -m pytest --setup-show tests/functional/test_recipes.py
======================================= test session starts ======================================
tests/functional/test_recipes.py
...
SETUP F test_client
functional/test_recipes.py::test_home_page_with_fixture (fixtures used: test_client).
TEARDOWN F test_client
SETUP F test_client
functional/test_recipes.py::test_home_page_post_with_fixture (fixtures used: test_client).
TEARDOWN F test_client
======================================== 4 passed in 0.21s =======================================
Since we want the `test_client` fixture to run only once in this module, revert the scope back to 'module'.
When developing tests, it's nice to understand how much of the source code is actually tested. This concept is known as code coverage.
I need to be very clear that having a test suite that covers 100% of the source code is in no way an indicator that the code is properly tested.
That metric means there are a lot of tests and a lot of effort has gone into developing them. The quality of the tests still needs to be checked by code inspection.
That said, the other extreme, where there's a minimal set of tests (or none at all!), is much worse!
There are two excellent packages available for determining code coverage: coverage.py and pytest-cov.
I recommend using pytest-cov based on its seamless integration with pytest. It's built on top of coverage.py, by Ned Batchelder, which is the standard for code coverage in Python.
Running pytest while checking code coverage requires the `--cov` argument to indicate which Python package (`project` in the Flask project structure) to check the coverage of:
(venv)$ python -m pytest --cov=project
============================= test session starts ==============================
tests/functional/test_recipes.py .... [ 30%]
tests/functional/test_users.py ..... [ 69%]
tests/unit/test_models.py .... [100%]
---------- coverage: platform darwin, python 3.8.5-final-0 -----------
Name Stmts Miss Cover
-------------------------------------------------
project/__init__.py 27 0 100%
project/models.py 32 2 94%
project/recipes/__init__.py 3 0 100%
project/recipes/routes.py 5 0 100%
project/users/__init__.py 3 0 100%
project/users/forms.py 18 1 94%
project/users/routes.py 50 4 92%
-------------------------------------------------
TOTAL 138 7 95%
============================== 13 passed in 0.86s ==============================
Even when checking code coverage, arguments can still be passed to pytest:
(venv)$ python -m pytest --setup-show --cov=project
This article served as a guide for testing Flask applications, focusing on:
Source: https://testdrive.io