Behavior-Driven Development with Django and Aloe

Imagine you are a Django developer building a social network for a lean startup. The CEO is pressuring your team for an MVP. The engineers have agreed to build the product using behavior-driven development (BDD) to deliver fast and efficient results. The product owner gives you the first feature request, and following the practice of all good programming methodologies, you begin the BDD process by writing a test. Next, you code a bit of functionality to make your test pass and consider your design. The last step requires you to analyze the feature itself. Does it belong in your app?

We can't answer that question for you, but we can teach you when to ask it. In the following tutorial, we walk you through the BDD development cycle by programming an example feature using Django and Aloe. Follow along to learn how you can use the BDD process to help catch and fix poor designs quickly while programming a stable app.

Objectives

By the time you complete this tutorial, you should be able to:

  1. Describe and practice behavior-driven development (BDD)
  2. Explain how to implement BDD in a new project
  3. Test your Django applications using Aloe

Project Setup

Want to build this project as you read the post?

Start by:

  1. Adding a project directory.
  2. Creating and activating a virtual environment.

Then, install the following dependencies and start a new Django project:

(venv)$ pip install \
        django==3.2.4 \
        djangorestframework==3.12.4 \
        aloe_django==0.2.0
(venv)$ django-admin startproject example_bdd .
(venv)$ python manage.py startapp example

You may need to manually install setuptools-scm (pip install setuptools-scm) if you get this error when trying to install aloe_django:

distutils.errors.DistutilsError: Could not find suitable distribution for Requirement.parse('setuptools_scm')

Update the INSTALLED_APPS list in settings.py:

INSTALLED_APPS = [

    ...

    'aloe_django',
    'rest_framework',
    'example',
]

Just looking for the code? Grab it from the repo.

Brief Overview of BDD

Behavior-driven development is a way of testing your code that challenges you to constantly revisit your design. When you write a test, you answer the question Does my code do what I expect it to do? through assertions. Failing tests expose the mistakes in your code. With BDD, you analyze a feature: Is the user experience what I expect it to be? There's nothing as concrete as a failing test to expose a bad feature, but the consequences of delivering a bad experience are tangible.

Execute BDD as part of your test development cycle. Draw the functional boundaries of a feature with tests. Create code that colors in the details. Step back and consider your design. And then do it all over again until the picture is complete.

Review the following post for a more in-depth explanation of BDD.

Your First Feature Request

"Users should be able to log into the app and see a list of their friends."

That's how your product manager starts the conversation about the app's first feature. It's not much, but you can use it to write a test. She's actually requesting two pieces of functionality:

  1. user authentication
  2. the ability to form relationships between users

Here's a rule of thumb: treat a conjunction like a beacon, warning you against trying to test too many things at once. If you ever see an "and" or an "or" in a test statement, you should break that test into smaller ones.

With that truism in mind, take the first half of the feature request and write a test scenario: a user can log into the app. In order to support user authentication, your app must store user credentials and give users a way to access their data with those credentials. Here's how you translate those criteria into an Aloe .feature file.

example/features/friendships.feature

Feature: Friendships

  Scenario: A user can log into the app

    Given I empty the "User" table

    And I create the following users:
      | id | email             | username | password  |
      | 1  | annie@example.com | Annie    | pAssw0rd! |

    When I log in with username "Annie" and password "pAssw0rd!"

    Then I am logged in

An Aloe test case is called a feature. You program features using two files: a Feature file and a Steps file.

  1. The Feature file consists of statements written in plain English that describe how to configure, execute, and confirm the results of a test. Use the Feature keyword to label the feature and the Scenario keyword to define a user story that you are planning to test. In the example above, the scenario defines a series of steps that explain how to populate the User database table, log a user into the app, and validate the login. All step statements must begin with one of four keywords: Given, When, Then, or And.
  2. The Steps file contains Python functions that are mapped to the Feature file steps using regular expressions.

You may need to add an __init__.py file to the "features" directory for the interpreter to load the friendships_steps.py file correctly.
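
For reference, once both files exist, the app's test layout should look like this:

example
└── features
    ├── __init__.py
    ├── friendships.feature
    └── friendships_steps.py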

Run python manage.py harvest and see the following output.

nosetests --verbosity=1
Creating test database for alias 'default'...
E
======================================================================
ERROR: A user can log into the app (example.features.friendships: Friendships)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "django-aloe-bdd/venv/lib/python3.9/site-packages/aloe/registry.py", line 151, in wrapped
    return function(*args, **kwargs)
  File "django-aloe-bdd/example/features/friendships.feature", line 5, in A user can log into the app
    Given I empty the "User" table
  File "django-aloe-bdd/venv/lib/python3.9/site-packages/aloe/registry.py", line 151, in wrapped
    return function(*args, **kwargs)
  File "django-aloe-bdd/venv/lib/python3.9/site-packages/aloe/exceptions.py", line 44, in undefined_step
    raise NoDefinitionFound(step)
aloe.exceptions.NoDefinitionFound: The step r"Given I empty the "User" table" is not defined
-------------------- >> begin captured logging << --------------------
asyncio: DEBUG: Using selector: KqueueSelector
--------------------- >> end captured logging << ---------------------

----------------------------------------------------------------------
Ran 1 test in 0.506s

FAILED (errors=1)
Destroying test database for alias 'default'...

The test fails because you haven't mapped the step statements to Python functions. Do so in the following file.

example/features/friendships_steps.py

from aloe import before, step, world
from aloe.tools import guess_types
from aloe_django.steps.models import get_model
from django.contrib.auth.models import User
from rest_framework.test import APIClient


@before.each_feature
def before_each_feature(feature):
    world.client = APIClient()


@step('I empty the "([^"]+)" table')
def step_empty_table(self, model_name):
    get_model(model_name).objects.all().delete()


@step('I create the following users:')
def step_create_users(self):
    for user in guess_types(self.hashes):
        User.objects.create_user(**user)


@step('I log in with username "([^"]+)" and password "([^"]+)"')
def step_log_in(self, username, password):
    world.is_logged_in = world.client.login(username=username, password=password)


@step('I am logged in')
def step_confirm_log_in(self):
    assert world.is_logged_in

Each statement is mapped to a Python function via a @step() decorator. For example, Given I empty the "User" table will trigger the step_empty_table() function to run. In this case, the string "User" will be captured and passed to the function as the model_name parameter. The Aloe API includes a special global variable called world that can be used to store and retrieve data between test steps. Notice how the world.is_logged_in variable is created in step_log_in() and then accessed in step_confirm_log_in(). Aloe also defines a special @before decorator to execute functions before tests run.

One last thing: Consider the structure of the following statement:

And I create the following users:
  | id | email             | username | password  |
  | 1  | annie@example.com | Annie    | pAssw0rd! |

With Aloe, you can represent lists of dictionaries using a tabular structure. You can then access the data using self.hashes. Wrapping self.hashes in the guess_types() function returns the list with the dictionary values correctly typed. In this example, guess_types(self.hashes) returns the following list.

[{'id': 1, 'email': 'annie@example.com', 'username': 'Annie', 'password': 'pAssw0rd!'}]

Run the Aloe test suite with the following command and see all tests pass.

(venv)$ python manage.py harvest
nosetests --verbosity=1
Creating test database for alias 'default'...
.
----------------------------------------------------------------------
Ran 1 test in 0.512s

OK
Destroying test database for alias 'default'...

Write a test scenario for the second part of the feature request: a user can see a list of friends.

example/features/friendships.feature

Scenario: A user can see a list of friends

  Given I empty the "Friendship" table

  When I get a list of friends

  Then I see the following response data:
    | id | email | username |

Before you run the Aloe test suite, modify the first scenario to use the keyword Background instead of Scenario. Background is a special type of scenario that runs before each Scenario block in the Feature file. Every scenario needs to start with a clean slate, and using Background refreshes the data before each one runs.

example/features/friendships.feature

Feature: Friendships

  Background: Set up common data

    Given I empty the "User" table

    And I create the following users:
      | id | email             | username | password  |
      | 1  | annie@example.com | Annie    | pAssw0rd! |
      | 2  | brian@example.com | Brian    | pAssw0rd! |
      | 3  | casey@example.com | Casey    | pAssw0rd! |

    When I log in with username "Annie" and password "pAssw0rd!"

    Then I am logged in

  Scenario: A user can see a list of friends

    Given I empty the "Friendship" table

    And I create the following friendships:
      | id | user1 | user2 |
      | 1  | 1     | 2     |

    # Annie and Brian are now friends.

    When I get a list of friends

    Then I see the following response data:
      | id | email             | username |
      | 2  | brian@example.com | Brian    |

Now that you're dealing with friendships between multiple users, the Background adds a couple of new user records to the database to start. The new scenario clears all entries from the "Friendship" table and creates one new record to define a friendship between Annie and Brian. Then it calls an API to retrieve a list of Annie's friends and confirms that the response data includes Brian.

The first step is to create a Friendship model. It's simple: It just links two users together.

example/models.py

from django.conf import settings
from django.db import models


class Friendship(models.Model):
    user1 = models.ForeignKey(
      settings.AUTH_USER_MODEL,
      on_delete=models.CASCADE,
      related_name='user1_friendships'
    )
    user2 = models.ForeignKey(
      settings.AUTH_USER_MODEL,
      on_delete=models.CASCADE,
      related_name='user2_friendships'
    )

Make a migration and run it.

(venv)$ python manage.py makemigrations
(venv)$ python manage.py migrate

Next, create a new test step for the I create the following friendships: statement.

example/features/friendships_steps.py

@step('I create the following friendships:')
def step_create_friendships(self):
    Friendship.objects.bulk_create([
        Friendship(
            id=data['id'],
            user1=User.objects.get(id=data['user1']),
            user2=User.objects.get(id=data['user2'])
        ) for data in guess_types(self.hashes)
    ])

Add the Friendship model import to the file.

from ..models import Friendship

Create an API to get a list of the logged-in user's friends. Create a serializer to handle the representation of the User resource.

example/serializers.py

from django.contrib.auth.models import User
from rest_framework import serializers


class UserSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = ('id', 'email', 'username',)
        read_only_fields = fields

Create a manager to handle table-level functionality for your Friendship model.

example/models.py

# New import!
from django.db.models import Q


class FriendshipManager(models.Manager):
    def friends(self, user):
        """Get all users that are friends with the specified user."""
        # Get all friendships that involve the specified user.
        friendships = self.get_queryset().select_related(
            'user1', 'user2'
        ).filter(
            Q(user1=user) |
            Q(user2=user)
        )

        def other_user(friendship):
            if friendship.user1 == user:
                return friendship.user2
            return friendship.user1

        return map(other_user, friendships)

The friends() function retrieves all of the friendships that the specified user shares with other users and then returns the other user from each of those friendships. Add objects = FriendshipManager() to the Friendship model.
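
For clarity, here's how the model and manager fit together after that change (a sketch; the fields are unchanged from the earlier listing):

example/models.py

class Friendship(models.Model):
    # Attach the custom manager so Friendship.objects.friends(user) is available.
    objects = FriendshipManager()

    user1 = models.ForeignKey(
      settings.AUTH_USER_MODEL,
      on_delete=models.CASCADE,
      related_name='user1_friendships'
    )
    user2 = models.ForeignKey(
      settings.AUTH_USER_MODEL,
      on_delete=models.CASCADE,
      related_name='user2_friendships'
    )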

Create a simple ListAPIView to return a JSON-serialized list of your User resources.

example/views.py

from rest_framework.generics import ListAPIView

from .models import Friendship
from .serializers import UserSerializer


class FriendsView(ListAPIView):
    serializer_class = UserSerializer

    def get_queryset(self):
        return Friendship.objects.friends(self.request.user)

Finally, add a URL path.

example_bdd/urls.py

from django.urls import path

from example.views import FriendsView

urlpatterns = [
    path('friends/', FriendsView.as_view(), name='friends'),
]

Create the remaining Python step functions: One to call your new API and another generic function to confirm response payload data. (We can reuse this function to check any payload.)

example/features/friendships_steps.py

@step('I get a list of friends')
def step_get_friends(self):
    world.response = world.client.get('/friends/')


@step('I see the following response data:')
def step_confirm_response_data(self):
    response = world.response.json()
    if isinstance(response, list):
        assert guess_types(self.hashes) == response
    else:
        assert guess_types(self.hashes)[0] == response

Run the tests and watch them pass.

(venv)$ python manage.py harvest

Think of another test scenario. Users with no friends should see an empty list when they call the API.

example/features/friendships.feature

Scenario: A user with no friends sees an empty list

  Given I empty the "Friendship" table

  # Annie has no friends.

  When I get a list of friends

  Then I see the following response data:
    | id | email | username |

No new Python functions are required. You can reuse all of your steps! Tests pass without any intervention.

You need one last piece of functionality to get this feature off the ground. Users can get a list of their friends, but how do they make new friends? Here's a new scenario: "a user should be able to add another user as a friend." Users should be able to call an API to create a friendship with another user. You know the API works if a record gets created in the database.

example/features/friendships.feature

Scenario: A user can add a friend

  Given I empty the "Friendship" table

  When I add the following friendship:
    | user1 | user2 |
    | 1     | 2     |

  Then I see the following rows in the "Friendship" table:
    | user1 | user2 |
    | 1     | 2     |

Create the new step functions.

example/features/friendships_steps.py

@step('I add the following friendship:')
def step_add_friendship(self):
    world.response = world.client.post('/friendships/', data=guess_types(self.hashes[0]))


@step('I see the following rows in the "([^"]+)" table:')
def step_confirm_table(self, model_name):
    model_class = get_model(model_name)
    for data in guess_types(self.hashes):
        has_row = model_class.objects.filter(**data).exists()
        assert has_row

Extend the manager and do some refactoring.

example/models.py

class FriendshipManager(models.Manager):
    def friendships(self, user):
        """Get all friendships that involve the specified user."""
        return self.get_queryset().select_related(
            'user1', 'user2'
        ).filter(
            Q(user1=user) |
            Q(user2=user)
        )

    def friends(self, user):
        """Get all users that are friends with the specified user."""
        friendships = self.friendships(user)

        def other_user(friendship):
            if friendship.user1 == user:
                return friendship.user2
            return friendship.user1

        return map(other_user, friendships)

Add a new serializer to render the Friendship resources.

example/serializers.py

class FriendshipSerializer(serializers.ModelSerializer):
    class Meta:
        model = Friendship
        fields = ('id', 'user1', 'user2',)
        read_only_fields = ('id',)

Add a new view.

example/views.py

# New imports!
from rest_framework.viewsets import ModelViewSet

from .serializers import FriendshipSerializer


class FriendshipsView(ModelViewSet):
    serializer_class = FriendshipSerializer

    def get_queryset(self):
        return Friendship.objects.friendships(self.request.user)

Add a new URL.

example_bdd/urls.py

path('friendships/', FriendshipsView.as_view({'post': 'create'})),

Your code works and the tests pass!

Analyzing the Feature

Now that you've successfully programmed and tested your feature, it's time to analyze it. Two users become friends when one user adds the other one. This is not ideal behavior. Maybe the other user doesn't want to be friends -- don't they get a say? A user should request a friendship with another user, and the other user should be able to accept or reject that friendship.

Revise the scenario where a user adds another user as a friend: "a user should be able to request a friendship with another user."

Replace Scenario: A user can add a friend with this one.

example/features/friendships.feature

Scenario: A user can request a friendship with another user

  Given I empty the "Friendship" table

  When I request the following friendship:
    | user1 | user2 |
    | 1     | 2     |

  Then I see the following response data:
    | id | user1 | user2 | status  |
    | 3  | 1     | 2     | PENDING |

Refactor your test step to use a new API, /friendship-requests/.

example/features/friendships_steps.py

@step('I request the following friendship:')
def step_request_friendship(self):
    world.response = world.client.post('/friendship-requests/', data=guess_types(self.hashes[0]))

Start by adding a new status field to the Friendship model.

example/models.py

class Friendship(models.Model):
    PENDING = 'PENDING'
    ACCEPTED = 'ACCEPTED'
    REJECTED = 'REJECTED'
    STATUSES = (
      (PENDING, PENDING),
      (ACCEPTED, ACCEPTED),
      (REJECTED, REJECTED),
    )
    objects = FriendshipManager()
    user1 = models.ForeignKey(
      settings.AUTH_USER_MODEL,
      on_delete=models.CASCADE,
      related_name='user1_friendships'
    )
    user2 = models.ForeignKey(
      settings.AUTH_USER_MODEL,
      on_delete=models.CASCADE,
      related_name='user2_friendships'
    )
    status = models.CharField(max_length=8, choices=STATUSES, default=PENDING)

Friendships can be ACCEPTED or REJECTED. If the other user has not taken action, then the default status is PENDING.

Make a migration and migrate the database.

(venv)$ python manage.py makemigrations
(venv)$ python manage.py migrate

Rename the FriendshipsView to FriendshipRequestsView.

example/views.py

class FriendshipRequestsView(ModelViewSet):
    serializer_class = FriendshipSerializer

    def get_queryset(self):
        return Friendship.objects.friendships(self.request.user)

Replace the old URL path with the new one.

example_bdd/urls.py

path('friendship-requests/', FriendshipRequestsView.as_view({'post': 'create'}))

Add new test scenarios to test the accept and reject actions.

example/features/friendships.feature

Scenario: A user can accept a friendship request

  Given I empty the "Friendship" table

  And I create the following friendships:
    | id | user1 | user2 | status  |
    | 1  | 2     | 1     | PENDING |

  When I accept the friendship request with ID "1"

  Then I see the following response data:
    | id | user1 | user2 | status   |
    | 1  | 2     | 1     | ACCEPTED |

Scenario: A user can reject a friendship request

  Given I empty the "Friendship" table

  And I create the following friendships:
    | id | user1 | user2 | status  |
    | 1  | 2     | 1     | PENDING |

  When I reject the friendship request with ID "1"

  Then I see the following response data:
    | id | user1 | user2 | status   |
    | 1  | 2     | 1     | REJECTED |

Add new test steps.

example/features/friendships_steps.py

@step('I accept the friendship request with ID "([^"]+)"')
def step_accept_friendship_request(self, pk):
    world.response = world.client.put(f'/friendship-requests/{pk}/', data={
      'status': Friendship.ACCEPTED
    })


@step('I reject the friendship request with ID "([^"]+)"')
def step_reject_friendship_request(self, pk):
    world.response = world.client.put(f'/friendship-requests/{pk}/', data={
      'status': Friendship.REJECTED
    })

Add one more URL path. Users need to target the specific friendship they want to accept or reject.

example_bdd/urls.py

path('friendship-requests/<int:pk>/', FriendshipRequestsView.as_view({'put': 'partial_update'}))
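
For reference, assembled from the paths added so far, the project's URL configuration should now look something like this (a sketch; only the routes shown in this post are included):

example_bdd/urls.py

from django.urls import path

from example.views import FriendsView, FriendshipRequestsView

urlpatterns = [
    path('friends/', FriendsView.as_view(), name='friends'),
    path('friendship-requests/', FriendshipRequestsView.as_view({'post': 'create'})),
    path('friendship-requests/<int:pk>/', FriendshipRequestsView.as_view({'put': 'partial_update'})),
]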

Update Scenario: A user can see a list of friends to include the new status field.

example/features/friendships.feature

Scenario: A user can see a list of friends

  Given I empty the "Friendship" table

  And I create the following friendships:
    | id | user1 | user2 | status   |
    | 1  | 1     | 2     | ACCEPTED |

  # Annie and Brian are now friends.

  When I get a list of friends

  Then I see the following response data:
    | id | email             | username |
    | 2  | brian@example.com | Brian    |

Add one more scenario after Scenario: A user can see a list of friends to test filtering on the status. A user's friends consist of people who have accepted friendship requests from the user. Those who have not taken action or who have rejected the requests are not considered.

example/features/friendships.feature

Scenario: A user with no accepted friendship requests sees an empty list

  Given I empty the "Friendship" table

  And I create the following friendships:
    | id | user1 | user2 | status   |
    | 1  | 1     | 2     | PENDING  |
    | 2  | 1     | 3     | REJECTED |

  When I get a list of friends

  Then I see the following response data:
    | id | email | username |

Edit the step_create_friendships() function to handle the new status field on the Friendship model.

example/features/friendships_steps.py

@step('I create the following friendships:')
def step_create_friendships(self):
    Friendship.objects.bulk_create([
        Friendship(
            id=data['id'],
            user1=User.objects.get(id=data['user1']),
            user2=User.objects.get(id=data['user2']),
            status=data['status']
        ) for data in guess_types(self.hashes)
    ])

Also edit the FriendshipSerializer to include the new status field.

example/serializers.py

class FriendshipSerializer(serializers.ModelSerializer):
    class Meta:
        model = Friendship
        fields = ('id', 'user1', 'user2', 'status',)
        read_only_fields = ('id',)

Complete the filtering by adjusting the friends() method on the manager.

example/models.py

def friends(self, user):
    """Get all users that are friends with the specified user."""
    friendships = self.friendships(user).filter(status=Friendship.ACCEPTED)

    def other_user(friendship):
        if friendship.user1 == user:
            return friendship.user2
        return friendship.user1

    return map(other_user, friendships)

Feature complete!

Conclusion

If you take one thing from this post, I hope it's this: Behavior-driven development is as much about feature analysis as it is about writing, testing, and designing code. Without that crucial step, you're not creating software, you're just programming. BDD is not the only way to produce software, but it's a good one. And if you're practicing BDD with a Django project, give Aloe a try.

Grab the code from the repo.

Original article source at: https://testdriven.io/

Modern Test-Driven Development in Python

Testing production-grade code is hard. Sometimes it can take nearly all of your time during feature development. What's more, even when you have 100% coverage and tests are green, you still may not feel confident that the new feature will work properly in production.

This guide will take you through the development of an application using Test-Driven Development (TDD). We'll look at how and what you should test. We'll use pytest for testing, pydantic to validate data and reduce the number of tests required, and Flask to provide an interface for our clients via a RESTful API. By the end, you'll have a solid pattern that you can use for any Python project so that you can have confidence that passing tests actually mean working software.

Objectives

By the end of this article, you will be able to:

  1. Explain how you should test your software
  2. Configure pytest and set up a project structure for testing
  3. Define database models with pydantic
  4. Use pytest fixtures for managing test state and performing side effects
  5. Verify JSON responses against JSON Schema definitions
  6. Organize database operations with commands (modify state, has side effects) and queries (read-only, no side effects)
  7. Write unit, integration, and end-to-end tests with pytest
  8. Explain why it's important to focus your testing efforts on testing behavior rather than implementation details

How Should I Test My Software?

Software developers tend to be very opinionated about testing, so they hold differing opinions about how important it is and how best to go about it. That said, let's look at three guidelines that (hopefully) most developers will agree with and that will help you write valuable tests:

Tests should tell you the expected behavior of the unit under test. Therefore, it's advisable to keep them short and to the point. The GIVEN, WHEN, THEN structure can help with this:

  • GIVEN - what are the initial conditions for the test?
  • WHEN - what is occurring that needs to be tested?
  • THEN - what is the expected response?
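
For instance, a trivially small test written in this style might look like the following (an illustrative example, unrelated to the app built below):

def test_slugify_title():
    """
    GIVEN an article title with mixed case and spaces
    WHEN the title is slugified
    THEN the result is lowercase with hyphens
    """
    title = "My New Article"                # GIVEN
    slug = title.lower().replace(" ", "-")  # WHEN
    assert slug == "my-new-article"         # THEN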

Each piece of behavior should be tested once -- and only once. Testing the same behavior more than once does not mean that your software is more likely to work. Tests need to be maintained too. If you make a small change to your code base and then twenty tests break, how do you know which functionality is broken? When only a single test fails, it's much easier to find the bug.

Each test must be independent from other tests. Otherwise, you'll have a hard time maintaining and running the test suite.

This guide is opinionated too. Don't take anything as a holy grail or silver bullet. Feel free to get in touch on Twitter (@jangiacomelli) to discuss anything related to this guide.

Basic Setup

With that, let's get our hands dirty. You're ready to see what all of this means in the real world. The simplest test with pytest looks like this:

def another_sum(a, b):
    return a + b


def test_another_sum():
    assert another_sum(3, 2) == 5

That's an example you've probably already seen at least once. First of all, you should never write tests inside your application code, so let's split this into two files and packages.

Create a new directory for this project and move into it:

$ mkdir testing_project
$ cd testing_project

Next, create (and activate) a virtual environment.

For more on managing dependencies and virtual environments, check out Modern Python Environments.

Third, install pytest:

(venv)$ pip install pytest

After that, create a new folder called "sum". Add an __init__.py file to the new folder to turn it into a package, along with an another_sum.py file:

def another_sum(a, b):
    return a + b

Add another folder named "tests" and add the following files and folders:

└── tests
    ├── __init__.py
    └── test_sum
        ├── __init__.py
        └── test_another_sum.py

You should now have:

├── sum
│   ├── __init__.py
│   └── another_sum.py
└── tests
    ├── __init__.py
    └── test_sum
        ├── __init__.py
        └── test_another_sum.py

In test_another_sum.py add:

from sum.another_sum import another_sum


def test_another_sum():
    assert another_sum(3, 2) == 5

Next, add an empty conftest.py file, which is used for storing pytest fixtures, inside the "tests" folder.

Finally, add a pytest.ini -- a pytest configuration file -- to the "tests" folder, which can also be empty at this point.

The full project structure should now look like:

├── sum
│   ├── __init__.py
│   └── another_sum.py
└── tests
    ├── __init__.py
    ├── conftest.py
    ├── pytest.ini
    └── test_sum
        ├── __init__.py
        └── test_another_sum.py

Keeping your tests together in a single package allows you to:

  1. Reuse pytest configuration across all tests
  2. Reuse fixtures across all tests
  3. Simplify the running of tests

You can run all the tests with this command:

(venv)$ python -m pytest tests

You should see the results of the tests, which in this case are for test_another_sum:

============================== test session starts ==============================
platform darwin -- Python 3.10.1, pytest-7.0.1, pluggy-1.0.0
rootdir: /testing_project/tests, configfile: pytest.ini
collected 1 item

tests/test_sum/test_another_sum.py .                                    [100%]

=============================== 1 passed in 0.01s ===============================

Real Application

Now that you have the basic idea behind how to set up and structure tests, let's build a simple blog application. We'll build it using TDD to see testing in action. We'll use Flask for our web framework and, to focus on testing, SQLite for our database.

Our app will have the following requirements:

  • articles can be created
  • articles can be fetched
  • articles can be listed

First, let's create a new project:

$ mkdir blog_app
$ cd blog_app

Second, create (and activate) a virtual environment.

Third, install pytest and pydantic, a data parsing and validation library:

(venv)$ pip install pytest && pip install "pydantic[email]"

pip install "pydantic[email]" installs pydantic along with email-validator, which will be used for validating email addresses.

Next, create the following files and folders:

blog_app
    ├── blog
    │   ├── __init__.py
    │   ├── app.py
    │   └── models.py
    └── tests
        ├── __init__.py
        ├── conftest.py
        └── pytest.ini

Add the following code to models.py to define a new Article model with pydantic:

import os
import sqlite3
import uuid
from typing import List

from pydantic import BaseModel, EmailStr, Field


class NotFound(Exception):
    pass


class Article(BaseModel):
    id: str = Field(default_factory=lambda: str(uuid.uuid4()))
    author: EmailStr
    title: str
    content: str

    @classmethod
    def get_by_id(cls, article_id: str):
        con = sqlite3.connect(os.getenv("DATABASE_NAME", "database.db"))
        con.row_factory = sqlite3.Row

        cur = con.cursor()
        cur.execute("SELECT * FROM articles WHERE id=?", (article_id,))

        record = cur.fetchone()

        if record is None:
            raise NotFound

        article = cls(**record)  # Row can be unpacked as dict
        con.close()

        return article

    @classmethod
    def get_by_title(cls, title: str):
        con = sqlite3.connect(os.getenv("DATABASE_NAME", "database.db"))
        con.row_factory = sqlite3.Row

        cur = con.cursor()
        cur.execute("SELECT * FROM articles WHERE title = ?", (title,))

        record = cur.fetchone()

        if record is None:
            raise NotFound

        article = cls(**record)  # Row can be unpacked as dict
        con.close()

        return article

    @classmethod
    def list(cls) -> List["Article"]:
        con = sqlite3.connect(os.getenv("DATABASE_NAME", "database.db"))
        con.row_factory = sqlite3.Row

        cur = con.cursor()
        cur.execute("SELECT * FROM articles")

        records = cur.fetchall()
        articles = [cls(**record) for record in records]
        con.close()

        return articles

    def save(self) -> "Article":
        with sqlite3.connect(os.getenv("DATABASE_NAME", "database.db")) as con:
            cur = con.cursor()
            cur.execute(
                "INSERT INTO articles (id,author,title,content) VALUES(?, ?, ?, ?)",
                (self.id, self.author, self.title, self.content)
            )
            con.commit()

        return self

    @classmethod
    def create_table(cls, database_name="database.db"):
        conn = sqlite3.connect(database_name)

        conn.execute(
            "CREATE TABLE IF NOT EXISTS articles (id TEXT, author TEXT, title TEXT, content TEXT)"
        )
        conn.close()

This is an Active Record-style model, which provides methods for storing, fetching a single article, and listing all articles.
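
To make that interface concrete, here's how the model might be exercised from a Python shell (illustrative only; by default it writes to database.db in the current directory):

from blog.models import Article

Article.create_table()

article = Article(
    author="john@doe.com",
    title="My first article",
    content="Hello, world!"
).save()

assert Article.get_by_id(article.id).title == "My first article"
assert article.id in [a.id for a in Article.list()]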

You may be wondering why we didn't write tests to cover the model. We'll get to the why shortly.

Create a New Article

Next, let's cover our business logic. We'll write some helper commands and queries to separate our logic from the model and API. Since we're using pydantic, we can easily validate data based on our model.

Create a "test_article" package in the "tests" folder. Then, add a file called test_commands.py to it.

blog_app
    ├── blog
    │   ├── __init__.py
    │   ├── app.py
    │   └── models.py
    └── tests
        ├── __init__.py
        ├── conftest.py
        ├── pytest.ini
        └── test_article
            ├── __init__.py
            └── test_commands.py

Add the following tests to test_commands.py:

import pytest

from blog.models import Article
from blog.commands import CreateArticleCommand, AlreadyExists


def test_create_article():
    """
    GIVEN CreateArticleCommand with valid author, title, and content properties
    WHEN the execute method is called
    THEN a new Article must exist in the database with the same attributes
    """
    cmd = CreateArticleCommand(
        author="john@doe.com",
        title="New Article",
        content="Super awesome article"
    )

    article = cmd.execute()

    db_article = Article.get_by_id(article.id)

    assert db_article.id == article.id
    assert db_article.author == article.author
    assert db_article.title == article.title
    assert db_article.content == article.content


def test_create_article_already_exists():
    """
    GIVEN CreateArticleCommand with a title of some article in database
    WHEN the execute method is called
    THEN the AlreadyExists exception must be raised
    """

    Article(
        author="jane@doe.com",
        title="New Article",
        content="Super extra awesome article"
    ).save()

    cmd = CreateArticleCommand(
        author="john@doe.com",
        title="New Article",
        content="Super awesome article"
    )

    with pytest.raises(AlreadyExists):
        cmd.execute()

These tests cover the following business use cases:

  • articles should be created for valid data
  • article title must be unique

Run the tests from your project directory to see that they fail:

(venv)$ python -m pytest tests

Now we can implement our command.

Add a commands.py file to the "blog" folder:

from pydantic import BaseModel, EmailStr

from blog.models import Article, NotFound


class AlreadyExists(Exception):
    pass


class CreateArticleCommand(BaseModel):
    author: EmailStr
    title: str
    content: str

    def execute(self) -> Article:
        try:
            Article.get_by_title(self.title)
            raise AlreadyExists
        except NotFound:
            pass

        article = Article(
            author=self.author,
            title=self.title,
            content=self.content
        ).save()

        return article

Test Fixtures

We can use pytest fixtures to clear the database after each test and create a new one before each test. Fixtures are functions decorated with a @pytest.fixture decorator. They are usually located inside conftest.py, but they can be added to the actual test files as well. These functions run before each test that uses them.

One option is to use their returned values inside your tests. For example:

import random
import pytest


@pytest.fixture
def random_name():
    names = ["John", "Jane", "Marry"]
    return random.choice(names)


def test_fixture_usage(random_name):
    assert random_name

So, to use the value returned from the fixture inside the test you just need to add the name of the fixture function as a parameter to the test function.

Another option is to perform a side effect, like creating a database or mocking a module.

You can also run part of a fixture before and part after a test using yield instead of return. For example:

@pytest.fixture
def some_fixture():
    # do something before your test
    yield # test runs here
    # do something after your test

Now, add the following fixture to conftest.py, which creates a new database before each test and removes it after:

import os
import tempfile

import pytest

from blog.models import Article


@pytest.fixture(autouse=True)
def database():
    _, file_name = tempfile.mkstemp()
    os.environ["DATABASE_NAME"] = file_name
    Article.create_table(database_name=file_name)
    yield
    os.unlink(file_name)

The autouse flag is set to True so that it's automatically used by default before (and after) each test in the test suite. Since we're using a database for all tests it makes sense to use this flag. That way you don't have to explicitly add the fixture name to every test as a parameter.

If you do happen to not need access to the database for a test here and there you can disable autouse with a test marker. You can see an example of this here.
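
One possible pattern for that opt-out (a sketch, assuming you register a no_db marker in pytest.ini) is to have the fixture check for the marker and skip its own setup:

@pytest.fixture(autouse=True)
def database(request):
    # Tests decorated with @pytest.mark.no_db skip database setup entirely.
    if request.node.get_closest_marker("no_db"):
        yield
        return

    _, file_name = tempfile.mkstemp()
    os.environ["DATABASE_NAME"] = file_name
    Article.create_table(database_name=file_name)
    yield
    os.unlink(file_name)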

Run the tests again:

(venv)$ python -m pytest tests

They should pass.

As you can see, our test only tests the CreateArticleCommand command. We don't test the actual Article model since it's not responsible for business logic. We know that the command works as expected. Therefore, there's no need to write any additional tests.

List All Articles

The next requirement is to list all articles. We'll use a query instead of command here, so add a new file called test_queries.py to the "test_article" folder:

from blog.models import Article
from blog.queries import ListArticlesQuery


def test_list_articles():
    """
    GIVEN 2 articles stored in the database
    WHEN the execute method is called
    THEN it should return 2 articles
    """
    Article(
        author="jane@doe.com",
        title="New Article",
        content="Super extra awesome article"
    ).save()
    Article(
        author="jane@doe.com",
        title="Another Article",
        content="Super awesome article"
    ).save()

    query = ListArticlesQuery()

    assert len(query.execute()) == 2

Run the tests:

(venv)$ python -m pytest tests

They should fail.

Add a queries.py file to the "blog" folder:

blog_app
    ├── blog
    │   ├── __init__.py
    │   ├── app.py
    │   ├── commands.py
    │   ├── models.py
    │   └── queries.py
    └── tests
        ├── __init__.py
        ├── conftest.py
        ├── pytest.ini
        └── test_article
            ├── __init__.py
            ├── test_commands.py
            └── test_queries.py

Now we can implement our query:

from typing import List

from pydantic import BaseModel

from blog.models import Article


class ListArticlesQuery(BaseModel):

    def execute(self) -> List[Article]:
        articles = Article.list()

        return articles

Despite having no parameters here, we inherited from BaseModel for consistency.

Run the tests again:

(venv)$ python -m pytest tests

They should now pass.

Get Article by ID

Getting a single article by its ID can be done in a similar way to listing all articles. Add a new test for GetArticleByIDQuery to test_queries.py:

from blog.models import Article
from blog.queries import ListArticlesQuery, GetArticleByIDQuery


def test_list_articles():
    """
    GIVEN 2 articles stored in the database
    WHEN the execute method is called
    THEN it should return 2 articles
    """
    Article(
        author="jane@doe.com",
        title="New Article",
        content="Super extra awesome article"
    ).save()
    Article(
        author="jane@doe.com",
        title="Another Article",
        content="Super awesome article"
    ).save()

    query = ListArticlesQuery()

    assert len(query.execute()) == 2


def test_get_article_by_id():
    """
    GIVEN ID of article stored in the database
    WHEN the execute method is called on GetArticleByIDQuery with an ID
    THEN it should return the article with the same ID
    """
    article = Article(
        author="jane@doe.com",
        title="New Article",
        content="Super extra awesome article"
    ).save()

    query = GetArticleByIDQuery(
        id=article.id
    )

    assert query.execute().id == article.id

Run the tests to ensure they fail:

(venv)$ python -m pytest tests

Next, add GetArticleByIDQuery to queries.py:

from typing import List

from pydantic import BaseModel

from blog.models import Article


class ListArticlesQuery(BaseModel):

    def execute(self) -> List[Article]:
        articles = Article.list()

        return articles


class GetArticleByIDQuery(BaseModel):
    id: str

    def execute(self) -> Article:
        article = Article.get_by_id(self.id)

        return article

The tests should now pass:

(venv)$ python -m pytest tests

Nice. We've met all of the requirements mentioned above:

  • articles can be created
  • articles can be fetched
  • articles can be listed

And they're all covered with tests. Since we're using pydantic for data validation at runtime, we don't need a lot of tests to cover the business logic as we don't need to write tests for validating data. If author is not a valid email, pydantic will raise an error. All that was needed was to set the author attribute to the EmailStr type. We don't need to test it either because it's already being tested by the pydantic maintainers.

With that, we're ready to expose this functionality to the world via a Flask RESTful API.

Expose the API with Flask

We'll introduce three endpoints that cover this requirement:

  1. /create-article/ - create a new article
  2. /article-list/ - retrieve all articles
  3. /article/<article_id>/ - fetch a single article

First, create a folder called "schemas" inside "test_article", and add two JSON schemas to it, Article.json and ArticleList.json.

Article.json:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Article",
  "type": "object",
  "properties": {
    "id": {
      "type": "string"
    },
    "author": {
      "type": "string"
    },
    "title": {
      "type": "string"
    },
    "content": {
      "type": "string"
    }
  },
  "required": ["id", "author", "title", "content"]
}

ArticleList.json:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "ArticleList",
  "type": "array",
  "items": {"$ref":  "file:Article.json"}
}

JSON Schemas are used to define the responses from API endpoints. Before continuing, install the jsonschema Python library, which will be used to validate JSON payloads against the defined schemas, and Flask:

(venv)$ pip install jsonschema Flask

Next, let's write integration tests for our API.

Add a new file called test_app.py to "test_article":

import json
import pathlib

import pytest
from jsonschema import validate, RefResolver

from blog.app import app
from blog.models import Article


@pytest.fixture
def client():
    app.config["TESTING"] = True

    with app.test_client() as client:
        yield client


def validate_payload(payload, schema_name):
    """
    Validate payload with selected schema
    """
    schemas_dir = str(
        f"{pathlib.Path(__file__).parent.absolute()}/schemas"
    )
    schema = json.loads(pathlib.Path(f"{schemas_dir}/{schema_name}").read_text())
    validate(
        payload,
        schema,
        resolver=RefResolver(
            "file://" + str(pathlib.Path(f"{schemas_dir}/{schema_name}").absolute()),
            schema  # it's used to resolve the file inside schemas correctly
        )
    )


def test_create_article(client):
    """
    GIVEN request data for new article
    WHEN endpoint /create-article/ is called
    THEN it should return Article in json format that matches the schema
    """
    data = {
        'author': "john@doe.com",
        "title": "New Article",
        "content": "Some extra awesome content"
    }
    response = client.post(
        "/create-article/",
        data=json.dumps(
            data
        ),
        content_type="application/json",
    )

    validate_payload(response.json, "Article.json")


def test_get_article(client):
    """
    GIVEN ID of article stored in the database
    WHEN endpoint /article/<id-of-article>/ is called
    THEN it should return Article in json format that matches the schema
    """
    article = Article(
        author="jane@doe.com",
        title="New Article",
        content="Super extra awesome article"
    ).save()
    response = client.get(
        f"/article/{article.id}/",
        content_type="application/json",
    )

    validate_payload(response.json, "Article.json")


def test_list_articles(client):
    """
    GIVEN articles stored in the database
    WHEN endpoint /article-list/ is called
    THEN it should return list of Article in json format that matches the schema
    """
    Article(
        author="jane@doe.com",
        title="New Article",
        content="Super extra awesome article"
    ).save()
    response = client.get(
        "/article-list/",
        content_type="application/json",
    )

    validate_payload(response.json, "ArticleList.json")

So, what's happening here?

  1. First, we defined the Flask test client as a fixture so that it can be used in the tests.
  2. Next, we added a function for validating payloads. It takes two parameters:
    1. payload - JSON response from the API
    2. schema_name - name of the schema file inside the "schemas" directory
  3. Finally, there are three tests, one for each endpoint. Inside each test there's a call to the API and validation of the returned payload

Run the tests to ensure they fail at this point:

(venv)$ python -m pytest tests

Now we can write the API.

Update app.py like so:

from flask import Flask, jsonify, request

from blog.commands import CreateArticleCommand
from blog.queries import GetArticleByIDQuery, ListArticlesQuery

app = Flask(__name__)


@app.route("/create-article/", methods=["POST"])
def create_article():
    cmd = CreateArticleCommand(
        **request.json
    )
    return jsonify(cmd.execute().dict())


@app.route("/article/<article_id>/", methods=["GET"])
def get_article(article_id):
    query = GetArticleByIDQuery(
        id=article_id
    )
    return jsonify(query.execute().dict())


@app.route("/article-list/", methods=["GET"])
def list_articles():
    query = ListArticlesQuery()
    records = [record.dict() for record in query.execute()]
    return jsonify(records)


if __name__ == "__main__":
    app.run()

Our route handlers are pretty simple since all of our logic is covered by the commands and queries. Available actions with side effects (like mutations) are represented by commands -- e.g., creating a new article. On the other hand, actions that don't have side effects, the ones that are just reading current state, are covered by queries.

The command and query pattern used in this post is a simplified version of the CQRS pattern. We're combining CQRS and CRUD.

The .dict() method above is provided by the BaseModel from pydantic, which all of our models inherit from.
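
For instance (illustrative; the id value is generated per instance):

article = Article(author="john@doe.com", title="Hello", content="World")
article.dict()
# {'id': '16fd2706-...', 'author': 'john@doe.com', 'title': 'Hello', 'content': 'World'}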

The tests should pass:

(venv)$ python -m pytest tests

We've covered the happy path scenarios. In the real world we must expect that clients won't always use the API as it was intended. For example, when a request to create an article is made without a title, a ValidationError will be raised by the CreateArticleCommand command, which will result in an internal server error and an HTTP status 500. That's something that we want to avoid. Therefore, we need to handle such errors to notify the user about the bad request gracefully.

Let's write tests to cover such cases. Add the following to test_app.py:

@pytest.mark.parametrize(
    "data",
    [
        {
            "author": "John Doe",
            "title": "New Article",
            "content": "Some extra awesome content"
        },
        {
            "author": "John Doe",
            "title": "New Article",
        },
        {
            "author": "John Doe",
            "title": None,
            "content": "Some extra awesome content"
        }
    ]
)
def test_create_article_bad_request(client, data):
    """
    GIVEN request data with invalid values or missing attributes
    WHEN endpoint /create-article/ is called
    THEN it should return status 400
    """
    response = client.post(
        "/create-article/",
        data=json.dumps(
            data
        ),
        content_type="application/json",
    )

    assert response.status_code == 400
    assert response.json is not None

We used pytest's parametrize option, which simplifies passing in multiple inputs to a single test.

The tests should fail at this point because we haven't handled the ValidationError yet:

(venv)$ python -m pytest tests

So let's add an error handler to the Flask app inside app.py:

from pydantic import ValidationError

# Other code ...

app = Flask(__name__)


@app.errorhandler(ValidationError)
def handle_validation_exception(error):
    response = jsonify(error.errors())
    response.status_code = 400
    return response

# Other code ...

ValidationError has an errors method that returns a list of errors for each field that was either missing or failed validation. We can simply return this list in the body and set the response's status to 400.
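
For a request that omits content, for example, the 400 response body would look roughly like this (pydantic v1's error format; exact messages may vary by version):

[
  {
    "loc": ["content"],
    "msg": "field required",
    "type": "value_error.missing"
  }
]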

Now that the error is handled appropriately all tests should pass:

(venv)$ python -m pytest tests

Code Coverage

Now, with our application tested, it's time to check code coverage. Let's install a pytest coverage plugin called pytest-cov:

(venv)$ pip install pytest-cov

After the plugin is installed, we can check code coverage of our blog application like this:

(venv)$ python -m pytest tests --cov=blog

You should see something similar to:

---------- coverage: platform darwin, python 3.10.1-final-0 ----------
Name               Stmts   Miss  Cover
--------------------------------------
blog/__init__.py       0      0   100%
blog/app.py           25      1    96%
blog/commands.py      16      0   100%
blog/models.py        57      1    98%
blog/queries.py       12      0   100%
--------------------------------------
TOTAL                110      2    98%

Is 98% coverage good enough? It probably is. Nonetheless, remember one thing: a high coverage percentage is great, but the quality of your tests is much more important. If 70% or less of your code is covered, you should think about increasing that number. But it generally doesn't make sense to write tests just to go from 98% to 100%. (Again, tests need to be maintained just like your business logic!)

End-to-end Tests

We have a working API at this point that's fully tested. We can now look at how to write some end-to-end (e2e) tests. Since we have a simple API we can write a single e2e test to cover the following scenario:

  1. create a new article
  2. list articles
  3. get the first article from the list

First, install the requests library:

(venv)$ pip install requests

Second, add a new test to test_app.py:

import requests
# other code ...

@pytest.mark.e2e
def test_create_list_get(client):
    requests.post(
        "http://localhost:5000/create-article/",
        json={
            "author": "john@doe.com",
            "title": "New Article",
            "content": "Some extra awesome content"
        }
    )
    response = requests.get(
        "http://localhost:5000/article-list/",
    )

    articles = response.json()

    response = requests.get(
        f"http://localhost:5000/article/{articles[0]['id']}/",
    )

    assert response.status_code == 200

There are two things that we need to do before running this test...

First, register a marker called e2e with pytest by adding the following code to pytest.ini:

[pytest]
markers =
    e2e: marks tests as e2e (deselect with '-m "not e2e"')

pytest markers are used to exclude some tests from running or to include selected tests independent of their location.

To run only the e2e tests, run:

(venv)$ python -m pytest tests -m 'e2e'

To run all tests except e2e:

(venv)$ python -m pytest tests -m 'not e2e'

e2e tests are more expensive to run and require the app to be up and running, so you probably don't want to run them at all times.

Since our e2e test hits a live server, we'll need to spin up the app. Navigate to the project in a new terminal window, activate the virtual environment, and run the app:

(venv)$ FLASK_APP=blog/app.py python -m flask run

Now we can run our e2e test:

(venv)$ python -m pytest tests -m 'e2e'

You should see a 500 error. Why? Don't the unit tests pass? Yes, but the app never created the database table; the fixtures in our test suite did that for us. So let's create the database and table.

Add an init_db.py file to the "blog" folder:

if __name__ == "__main__":
    from blog.models import Article
    Article.create_table()

Run the new script and start the server again:

(venv)$ python blog/init_db.py
(venv)$ FLASK_APP=blog/app.py python -m flask run

If you run into any problems running init_db.py, you may need to set the Python path: export PYTHONPATH=$PYTHONPATH:$PWD.

The test should now pass:

(venv)$ python -m pytest tests -m 'e2e'

Testing Pyramid

We started with unit tests (to test the commands and queries) followed by integration tests (to test the API endpoints), and finished with e2e tests. In simple applications, as in this example, you may end up with a similar number of unit and integration tests. In general, the greater the complexity, the more you should see a pyramid-like shape in terms of the relationship between unit, integration, and e2e tests. That's where the "test pyramid" term comes from.

The Test Pyramid is a framework that can help developers create high-quality software.

Test pyramid

Using the Test Pyramid as a guide, you typically want 50% of your tests in your test suite to be unit tests, 30% to be integration tests, and 20% to be e2e tests.

Definitions:

  • Unit test - tests a single unit of code
  • Integration tests - tests that multiple units work together
  • e2e - tests the whole application against a live production-like server

The higher up you go in the pyramid, the more brittle and less predictable your tests are. What's more, e2e tests are by far the slowest to run so even though they can bring confidence that your application is doing what's expected of it, you shouldn't have nearly as many of them as unit or integration tests.

What is a Unit?

It's pretty straightforward what integration and e2e tests look like. There's much more discussion about unit tests since you first have to define what a "unit" actually is. Most testing tutorials show a unit test example that tests a single function or method. Production code is never that simple.

First things first, before defining what a unit is, let's look at what the point of testing is in general and what should be tested.

Why Test?

We write tests to:

  1. Ensure our code works as expected
  2. Protect our software against regressions

Nonetheless, when feedback cycles are too long, developers start thinking harder about which types of tests to write, since time is a major constraint in software development. That's why we want to have more unit tests than other types of tests. We want to find and fix the defect as fast as possible.

What to Test?

Now that we know why we should test, let's look at what we should test.

We should test the behavior of our software. (And, yes: This still applies to TDD, not just BDD.) This is because you shouldn't have to change your tests every time there's a change to the code base.

Think back to the real-world application example. From a testing perspective, we don't care where the articles are stored. It could be a text file, some other relational database, or a key/value store -- it doesn't matter. Again, our app had the following requirements:

  • articles can be created
  • articles can be fetched
  • articles can be listed

As long as those requirements don't change, a change to the storage medium shouldn't break our tests. Similarly, as long as those tests pass, we know our software meets those requirements -- so it's working.
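To make that concrete, a behavior-level test stays at the command/query layer and never mentions SQLite. A sketch, assuming the CreateArticleCommand and GetArticleByIDQuery classes used earlier in the article:

from blog.commands import CreateArticleCommand
from blog.queries import GetArticleByIDQuery

def test_article_can_be_created_and_fetched():
    # Only the public API is exercised; swapping the storage backend
    # should not break this test.
    article = CreateArticleCommand(
        author="john@doe.com",
        title="My Article",
        content="Some content",
    ).execute()

    fetched = GetArticleByIDQuery(id=article.id).execute()

    assert fetched == article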

So What is a Unit Then?

Each function/method is technically a unit, but we still shouldn't test every single one of them. Instead, focus your energy on testing the functions and methods that are publicly exposed from a module/package.

In our case, these were the execute methods. We don't expect to call the Article model directly from the Flask API, so don't focus much (if any) energy on testing it directly. To be more precise, the "units" that should be tested here are the execute methods of the commands and queries. If a method is not intended to be called directly by other parts of our software or by an end user, it's probably an implementation detail. Consequently, our tests are resistant to refactoring of implementation details, which is one of the qualities of great tests.

For example, our tests should still pass if we wrapped the logic for get_by_id and get_by_title in a "protected" method called _get_by_attribute:

# other code (the imports this snippet needs are shown for completeness) ...

import os
import sqlite3
import uuid

from pydantic import BaseModel, EmailStr, Field

# NotFound is the custom exception defined earlier in the article

class Article(BaseModel):
    id: str = Field(default_factory=lambda: str(uuid.uuid4()))
    author: EmailStr
    title: str
    content: str

    @classmethod
    def get_by_id(cls, article_id: str):
        return cls._get_by_attribute("SELECT * FROM articles WHERE id=?", (article_id,))

    @classmethod
    def get_by_title(cls, title: str):
        return cls._get_by_attribute("SELECT * FROM articles WHERE title = ?", (title,))

    @classmethod
    def _get_by_attribute(cls, sql_query: str, sql_query_values: tuple):
        con = sqlite3.connect(os.getenv("DATABASE_NAME", "database.db"))
        con.row_factory = sqlite3.Row

        cur = con.cursor()
        cur.execute(sql_query, sql_query_values)

        record = cur.fetchone()
        con.close()

        if record is None:
            raise NotFound

        return cls(**record)  # a Row can be unpacked like a dict

# other code ...

On the other hand, if you make a breaking change inside Article, the tests will fail. And that's exactly what we want. In that situation, we can either revert the breaking change or adapt to it inside our command or query.

Because there's one thing that we're striving for: Passing tests means working software.

When Should You Use Mocks?

We didn't use any mocks in our tests, because we didn't need them. Mocking methods or classes inside your modules or packages produces tests that are not resistant to refactoring because they are coupled to the implementation details. Such tests break often and are costly to maintain. On the other hand, it makes sense to mock external resources when speed is an issue (calls to external APIs, sending emails, long-running async processes, etc.).

For example, we could test the Article model separately and mock it inside our tests for CreateArticleCommand like so:

# imports assume the blog app layout used earlier in this article
from blog.commands import CreateArticleCommand
from blog.models import Article

def test_create_article(monkeypatch):
    """
    GIVEN CreateArticleCommand with valid properties author, title and content
    WHEN the execute method is called
    THEN a new Article must exist in the database with same attributes
    """
    article = Article(
        author="john@doe.com",
        title="New Article",
        content="Super awesome article"
    )
    monkeypatch.setattr(
        Article,
        "save",
        lambda self: article
    )
    cmd = CreateArticleCommand(
        author="john@doe.com",
        title="New Article",
        content="Super awesome article"
    )

    db_article = cmd.execute()

    assert db_article.id == article.id
    assert db_article.author == article.author
    assert db_article.title == article.title
    assert db_article.content == article.content

Yes, that's perfectly fine to do, but we now have more tests to maintain -- i.e., all the tests from before plus all the new tests for the methods in Article. Besides that, the only thing test_create_article now verifies is that the article returned from save is the same as the one returned by execute. If we break something inside Article, this test will still pass because we mocked it. And that's something we want to avoid: we want to test software behavior to ensure that it works as expected. Here, the behavior is broken, but our test won't show that.
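By contrast, here's the kind of mock that usually is worth writing: stubbing out a call to an external service. This is a hypothetical sketch -- notify_subscribers is not part of the article's code -- showing how monkeypatch keeps a test fast and offline:

import requests

def notify_subscribers(article_id: str) -> bool:
    # Hypothetical helper that calls a third-party HTTP API
    response = requests.post(
        "https://api.example.com/notify", json={"article_id": article_id}
    )
    return response.ok

def test_notify_subscribers(monkeypatch):
    class FakeResponse:
        ok = True

    # Stub out the network call -- we're testing our behavior,
    # not the third party's availability
    monkeypatch.setattr(requests, "post", lambda *args, **kwargs: FakeResponse())

    assert notify_subscribers("some-id") is True

Here the mock sits at the boundary of our system, so the test stays resistant to refactoring of everything on our side of that boundary.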

Takeaways

  1. There's no single right way to test your software. Nonetheless, it's easier to test logic when it's not coupled with your database. You can use the Active Record pattern with commands and queries (CQRS) to help with this.
  2. Focus on the business value of your code.
  3. Don't test methods just to say they're tested. You need working software, not tested methods. TDD is just a tool to deliver better software faster and more reliably. The same can be said for code coverage: try to keep it high, but don't add tests just to have 100% coverage.
  4. A test is valuable only when it protects you against regressions, allows you to refactor, and provides you fast feedback. Therefore, you should strive for your tests to resemble a pyramid shape (50% unit, 30% integration, 20% e2e), although in simple applications it may look more like a house (40% unit, 40% integration, 20% e2e), which is fine.
  5. The faster you notice regressions, the faster you can intercept and correct them. The faster you correct them, the shorter the development cycle. To speed up feedback, you can use pytest markers to exclude e2e and other slow tests during development. You can run them less frequently.
  6. Use mocks only when necessary (like for third-party HTTP APIs). They make your test setup more complicated and your tests overall less resistant to refactoring. Plus, they can result in false positives.
  7. Once again, your tests are a liability, not an asset; they should cover your software's behavior, but don't over-test.

Conclusion

There's a lot to digest here. Keep in mind that these are just examples used to show the ideas. You can use the same ideas with Domain-driven design (DDD), Behavior-driven development (BDD), and many other approaches. Remember that tests should be treated the same as any other code: they are a liability, not an asset. Write tests to protect your software against bugs, but don't let them burn your time.

Want to learn more?

The Complete Python Guide:

  1. Modern Python Environments - dependency and workspace management
  2. Testing in Python
  3. Modern Test-Driven Development in Python (this article!)
  4. Python Code Quality
  5. Python Type Checking
  6. Documenting Python Code and Projects
  7. Python Project Workflow

Original article source at: https://testdriven.io/

#python #test #development #flask 

How to Modern Test-Driven Development in Python

Gallium's AST interpreter As Separate Package to Simplify Development

ASTInterpreter

The AST Interpreter component of Gallium (i.e., it does not include breakpoints, etc.). This is a development prototype and comes with its own debug prompt for that purpose.

Usage:

using ASTInterpreter

function foo(n)
    x = n+1
    ((BigInt[1 1; 1 0])^x)[2,1]
end

interp = enter(foo, Environment(Dict(:n => 20), Dict{Symbol,Any}()))
ASTInterpreter.RunDebugREPL(interp)

Basic Commands:

  • n steps to the next line
  • s steps into the next call
  • finish runs to the end of the function
  • bt shows a simple backtrace
  • ` stuff (an expression prefixed with a backtick) runs stuff in the current frame's context
  • fr v will show all variables in the current frame
  • f n where n is an integer, will go to the n-th frame.

Advanced commands:

  • nc steps to the next call
  • ns steps to the next statement
  • se does one expression step
  • si does the same but steps into a call if a call is the next expression
  • sg steps into a generated function
  • shadow shows the internal representation of the expression tree (for debugger debugging only)
  • loc shows the column data for the current top frame, in the same format as JuliaParser's testshell.

This is a prototype, do not expect it to be correct or usable.

Experimental mode

There is an experimental UI mode, accessible by setting ASTInterpreter.fancy_mode = true, which attempts to provide a better interface but is currently not capable of handling all Julia code. Use at your own peril.

Current Dependencies

Pkg.clone("https://github.com/JuliaLang/Reactive.jl.git")
Pkg.clone("https://github.com/JuliaLang/JuliaParser.jl.git")
Pkg.clone("https://github.com/Keno/TerminalUI.jl.git")
Pkg.clone("https://github.com/Keno/VT100.jl.git")
Pkg.clone("https://github.com/Keno/AbstractTrees.jl.git")
Pkg.clone("https://github.com/Keno/LineNumbers.jl.git")
Pkg.clone("https://github.com/Keno/ASTInterpreter.jl.git")

Download Details:

Author: Keno
Source Code: https://github.com/Keno/ASTInterpreter.jl 
License: View license

#julia #development 

Gallium's AST interpreter As Separate Package to Simplify Development
Rupert  Beatty

Rupert Beatty

1667095380

ImagineEngine: A Project to Create A Blazingly Fast Swift Game Engine

ImagineEngine

Welcome to Imagine Engine, an ongoing project that aims to create a fast, high performance Swift 2D game engine for Apple's platforms that is also a joy to use. You are hereby invited to participate in this new community to build a tool with an ambitious but clear goal - to enable you to easily build any game that you can imagine.

Fast Core Animation-based rendering

Imagine Engine uses Core Animation as its rendering backend - just like Apple's UI frameworks like UIKit and AppKit do. By leveraging the power of Core Animation's hardware accelerated 2D rendering capabilities, Imagine Engine is able to push lots of pixels onto the screen at the same time. That means more objects, more effects and less restrictions when designing your games.

An easy to use API

Besides its goal of being blazingly fast at rendering & updating your games, Imagine Engine aims to provide an easy to use API that anyone can learn - regardless of game development experience.

Start with just a few lines of code...

let scene = Scene(size: UIScreen.main.bounds.size)

let label = Label(text: "Hello world")
label.position = scene.center
scene.add(label)

let window = GameWindow(scene: scene)
window.makeKeyAndVisible()

...and smoothly scale up as your game grows in complexity on either iOS, macOS or tvOS.

🌃 Scenes present your game content

A scene can be a level, a menu or a "Game over" screen. You can easily switch the active scene of a game. Here's how you can create a scene with a blue background color:

let scene = Scene(size: Size(width: 500, height: 300))
scene.backgroundColor = .blue
game.scene = scene

🎭 Actors bring your game to life

Actors are what will make up most of the active objects in any game. They are movable, animatable, can handle collisions and much more. Here's an example of how you can create a player that renders a "Running" animation, and constantly moves to the right:

let player = Actor()
player.animation = Animation(name: "Running", frameCount: 5, frameDuration: 0.15)
player.velocity.dx = 50
scene.add(player)

📦 Easily create platforms and tiled textures with Blocks

Using blocks you can easily tile textures together to form objects that can scale nicely to any size, without having to scale any texture. This is done by stitching together up to 9 different textures to form a block of textures rendered side by side. Here's how you can easily create a block from a folder named "Platform" that contains the textures that should be stitched together:

let block = Block(size: Size(width: 300, height: 300), textureCollectionName: "Platform")
scene.add(block)

🅰️ Render text using Labels

Labels let you add text content to your game. They automatically resize to fit your text content (unless you don't want them to) and can be used to implement things like UI, score counters, etc. Here's an example of adding a label to a scene:

let label = Label(text: "Welcome to my game!")
label.position = scene.center
scene.add(label)

⚡️ Use Events to drive your game logic

Events enable you to quickly script your games to drive your own logic. Imagine Engine's various objects contain built in events that can be used to observe whenever an object was moved, collided with something, etc. You can also define your own events that can be used to communicate between various parts of your code. Here's how you can observe whenever two actors collided with each other:

let player = Actor()
let enemy = Actor()

player.events.collided(with: enemy).observe {
    // Game over
}

🏃 Create animations and effects using Actions

Actions let you make objects do something over a period of time, for example moving, resizing, fading in and out etc. Imagine Engine contains a suite of built-in actions and also makes it easy for you to define your own. Here's how an actor can be moved over 3 seconds:

let actor = Actor()
scene.add(actor)
actor.move(byX: 200, y: 100, duration: 3)

🔌 Easily extend Imagine Engine with Plugins

Instead of relying on subclassing and overriding methods, Imagine Engine is designed to be easily extended through plugins. This enables you to share code between different games, and create new open source projects that add new functionality to the engine. You can attach plugins to most of Imagine Engine's objects, here's an example of creating a plugin that creates a new actor every time the scene is clicked or tapped:

class MyPlugin: Plugin {
    func activate(for scene: Scene, in game: Game) {
        scene.events.clicked.observe { scene in
            let actor = Actor()
            actor.position = scene.center
            scene.add(actor)
        }
    }
}

🕐 Precise timing using Timelines

Managing time and delayed events can sometimes be tricky in game development. Imagine Engine aims to make this a lot easier through its timeline API, that enables you to schedule single or repeated events in the future without having to worry about screen updates or if the game is paused. Here's how you can add an event to spawn a new enemy every 5 seconds:

scene.timeline.repeat(withInterval: 5) {
    let enemy = Actor()
    enemy.animation = Animation(name: "Enemy", frameCount: 5, frameDuration: 0.15)
    scene.add(enemy)
}

Platform support

  •  📱 iOS 9 or later
  •  🖥 macOS 10.12 or later
  •  📺 tvOS 10 or later

Imagine Engine supports all of Apple's platforms except watchOS. The API is also completely cross platform, so that you don't have to scatter #ifs all over your game code.

Xcode templates

Imagine Engine ships with Xcode project templates that make it super easy to get started with a new project. You can find more information & installation instructions here.

Let's get started!

To get started, check out the tutorials section, which contains tutorials that will walk you through building your first Imagine Engine-powered games with very few lines of code. No previous game developer experience required!

If you need help getting started or have a question about Imagine Engine, feel free to open an issue! We're a friendly community who would love to get more people involved.

Imagine Engine is in active development, with new features being constantly added. Need something new, or want to help out making the engine even more capable? Browse and create new issues or open a PR.

Let's build some awesome games together! 🚀

Download Details:

Author: JohnSundell
Source Code: https://github.com/JohnSundell/ImagineEngine 
License: View license

#swift #gameengine #development 

ImagineEngine: A Project to Create A Blazingly Fast Swift Game Engine
Reid  Rohan

Reid Rohan

1660551840

Ganache-ui: Personal Blockchain for Ethereum Development

Ganache

Ganache is your personal blockchain for Ethereum development.

Getting started

You can download a self-contained prebuilt Ganache binary for your platform of choice using the "Download" button on the Ganache website, or from this repository's releases page.

Contributing

Please open issues and pull requests for new features, questions, and bug fixes.

Requirements:

  • node v12.13.1

To get started:

  1. Clone this repo
  2. Run npm install
  3. Run npm run dev

If using Windows, you may need windows-build-tools installed first.

Building for All Platforms

Each platform has an associated npm run configuration to help you build on each platform more easily. Because each platform has different (but similar) build processes, they require different configuration. Note that both Windows and Mac require certificates to sign the built packages; for security reasons these certs aren't uploaded to github, nor are their passwords saved in source control.

On Windows:

Building on Windows will create a .appx file for use with the Windows Store.

Before building, create the ./certs directory with the following files:

  • ./certs/cert.pfx - Note a .pfx file is identical to a .p12. (Just change the extension if you've been given a .p12.)

In order to build on Windows, you must first ensure you have the Windows 10 SDK installed. If you have errors during the build process, ensure the package.json file's windowsStoreConfig.windowsKit points to your Windows 10 SDK directory. The value currently specified in package.json is what worked when this process was figured out; it may need to be updated periodically.

Because Windows requires a certificate to build the package -- and that certificate requires a password -- you'll need to run the following command instead of npm run make:

$ CERT_PASS="..." npm run build-windows

Replace ... in the command above with your certificate password.

This will create a .appx file in ./out/make.

On Mac:

Building on a Mac will create a standard Mac .dmg file.

Before building on a Mac, make sure you have Truffle's signing keys added to your keychain. Next, run the following command:

$ npm run build-mac

This will create a signed .dmg file in ./out/make.

On Linux:

Building on Linux will create a .AppImage file, meant to run on many versions of Linux.

Linux requires no signing keys, so there's no set up. Simply run the following command:

$ npm run build-linux

This will create a .AppImage file in ./out/make.

Generating Icon Assets

Asset generation generally only needs to happen once, or whenever the app's logo is updated. If you find you need to rebuild the assets, two tools were used:

electron-icon-maker generates assets for all platforms when using Electron's squirrel package; these assets live in ./static/icons. svg2uwptiles generates all assets needed for the Windows appx build; those assets live in ./build/appx. These locations can be changed in the future, but make sure to update the associated configuration pointing to these assets.

Note from the author: I found managing these assets manually -- especially the appx assets -- was a pain. If possible, try not to edit the assets themselves and use one of the generators above.

Flavored Development

"Extras" aren't stored in this repository due to file size issues, licensing issues, or both.

Non-ethereum "flavored" Ganache extras are uploaded to releases here: https://github.com/trufflesuite/ganache-flavors/releases

When "extras" change, they should be uploaded to a new release, along with a corresponding Ganache release that targets the new ganache-flavors release (see common/extras/index.js for what you'd need to update).

Corda

Corda requires 4 "extras" that get downloaded at runtime.

braid-server.jar is used to communicate to corda nodes via JSON RPC over HTTP. This file is built from https://gitlab.com/bluebank/braid/tree/master/braid-server. To build: run mvn clean install in the root of the project.

corda-tools-network-bootstrapper-4.3.jar is used to create corda networks from configuration (_node.conf) files. It contains an embedded corda.jar and the logic required to create a network. To update or download the latest corda-tools-network-bootstrapper go to https://software.r3.com/artifactory/corda-releases/net/corda/ and download the version you want. You'll need to update the file name in src/common/extras/index.js if the version changes.

Corda and braid require Java's JRE 1.8, aka 8. We "release" 4 versions of JRE 1.8: Linux x64, Mac x64, Windows x32, and Windows x64. The Java releases are downloaded from https://adoptopenjdk.net/archive.html -- we use "OpenJDK 8 (LTS)" with "HotSpot". To redistribute these files you will need to unpack/unzip them, then zip them up again (make sure you are on Linux for the Linux release, as it needs its file permissions properly embedded within the zip). It is very important that you ensure that all files are stored at the root of the zip. You'll also want to rename the zip files in the following format: OpenJDK8U-jre_{arch}_{os-name}_hotspot_{version}.zip. You'll need to update the version in src/common/extras/index.js if it changes.

Corda requires PostgreSQL 9.6. We "release" 4 versions of PostgreSQL 9.6: Linux x64, Mac x64, Windows x32, and Windows x64. These are downloaded from https://www.enterprisedb.com/downloads/postgres-postgresql-downloads. To redistribute these files you will need to unpack/unzip them, then zip them up again (make sure you are on Linux for the Linux release, as it needs its file permissions properly embedded within the zip). It is very important that you ensure that all files are stored at the root of the zip. You'll also want to rename the zip files in the following format: postgresql-{version}-2-{os-name}-{arch}-binaries.zip. You'll need to update the version in src/common/extras/index.js if it changes.

By Truffle

Ganache is part of the Truffle suite of tools. Find out more!

Download Details:

Author: Trufflesuite
Source Code: https://github.com/trufflesuite/ganache-ui 

#javascript #electron #development #ethereum 

Ganache-ui: Personal Blockchain for Ethereum Development
margaret mason

margaret mason

1648028158

Hire An Open Cart Developer in 2022

Lately, #opencart has emerged as one of the best tools for website development.

When you hire our OpenCart developers to work on your project, you can expect us to do everything needed to meet your requirements.

Are you looking for an OpenCart #developer at an affordable cost? Then you are at the right place.

Hire an OpenCart developer from Data EximIT to build your dream store and watch your sales grow!

Our skilled and committed OpenCart developers provide custom OpenCart solutions based on your business needs.

Our developers are talented and well versed in OpenCart development, and we provide first-class OpenCart #development services.

We have a team of expert OpenCart developers in India who are well versed in the latest tools and technologies.

We have dedicated OpenCart developers who will understand your requirements and work accordingly.

Why Contact Us to Hire OpenCart Developers?

  •   Fast turnaround time
  •   Best Infrastructure
  •   Highly qualified technical team
  •   Flexible working hours
  •   Domain knowledge expertise
  •   Dedicated developers for your project
  •   Affordable cost

Our experienced and expert developers provide end-to-end services for small, medium, and large organizations.

Hire dedicated developers from us and get advanced solutions built at a price you can afford.

Kindly drop an inquiry and our team will get back to you within 24 hours so that you can select and hire the best OpenCart developer!

Hire An Open Cart Developer in 2022

Ashok Kumar

1609142274

Top 9 Technologies Used to Develop Mobile Applications

Top 9 Technologies Used to Develop Mobile Applications
The demand for mobile apps has grown multifold in recent years. A Statista study reveals that almost 204 billion apps were downloaded in 2019 (iOS App Store, Google Play, and third-party Android stores combined), and the number was expected to reach 352.9 billion by 2021. Every company under the sun that sells, assists, provides a service, or offers information is developing mobile or web apps.
Along with how the app will function, it is also essential to pick the right technologies that serve the business needs and the programming languages for the specific platforms. The four major programming languages that are largely used for mobile app development are:

  1. Swift
    While developing an app for Apple products, Swift language is commonly used. Its stunning features require minimal code that is easy to maintain. Therefore, it is a very popular programming language.

  2. C++
    The ability of this programming language to create promising apps with great simplicity and effectivity makes it a versatile tool that can be used for multiple platforms.

  3. Java
    Java is the official programming language for Android. It comes with multiple open-source libraries for developers to choose from. Besides this, it is easy to handle and offers great flexibility.

  4. HTML
    HTML is an ideal programming language when it comes to developing a web application for mobile.

Device diversity ruling the mobile landscape has led to the rise of several tools. Below we have gathered the top 9 technologies that are used to develop mobile applications:

  1. React Native:
    React Native combines the best parts of native development with React, a best-in-class JavaScript library for building user interfaces. One can create platform-specific versions using a single codebase that can be shared across platforms. It also provides a core set of platform-agnostic native components like View, Text, and Image that map directly to the platform’s native UI building blocks.

  2. Flutter
    Flutter is Google’s UI toolkit that builds beautiful, natively compiled applications for mobile, web, and desktop using a single codebase. When one is looking for cross-platform development, Flutter can offer great options. Also, it comes with widgets that enable different sorts of features such as scrolling, navigation, fonts, icons, etc.

  3. Xamarin
    Xamarin is an app platform to build native Android, iOS, tvOS, watchOS, macOS, and Windows apps with .NET and C#. Whether the developer is designing a uniform UI across platforms or building a native user interface, the apps will behave the way users expect. Moreover, it is part of the vibrant .NET ecosystem, used by millions of developers worldwide.

  4. Ionic
    Ionic is a free and open-source mobile UI kit that consists of a library of mobile-optimized UI components, gestures, and tools for building fast, highly interactive apps, and helps in delivering consistent experiences across all channels with a single codebase.

  5. PhoneGap:
    The PhoneGap code was contributed to the Apache Software Foundation (ASF) under the name Apache Cordova. It is an open-source distribution of Cordova and consists of various tools like Ionic, Monaca, Onsen UI, App Builder, etc. that augment Cordova. It has a framework that facilitates combining native and hybrid code snippets and a set of device APIs that allow a mobile app developer to access native device functions.

  6. Appcelerator
    This is another exciting tool that can create great, native mobile apps, all from a single JavaScript codebase. Most importantly, it has high-quality tools that can be used to build apps for any device or operating system, such as Hyperloop, App Designer, and the Titanium SDK & IDE.

  7. Corona
    Corona is a free, cross-platform framework that is ideal for creating games and apps for mobile devices and desktop systems. It uses the powerful and easy-to-learn Lua scripting language, over 1000 built-in APIs, a vast selection of plugins, and Corona Native extensions (C/C++/Obj-C/Java) to build astonishing apps. One can publish to all major platforms from a single code base using Corona.

  8. RhoMobile
    RhoMobile is an open-source framework built around modern coding languages like JavaScript, HTML5, and CSS3. It consists of libraries of advanced widgets and effects that are easy for developers to integrate. Also, RhoMobile applications come with a RhoConnect client for easy integration and data synchronization with applications.

  9. Mobincube
    Mobincube allows creating and publishing feature-rich mobile apps. It has an interactive visual interface, which allows mobile app development companies to easily design the app. It offers an absolute level of customization and flexibility so that one can decide how the app will look.

Summing up

With a wide range of tools available in the market, it becomes very difficult to choose the most appropriate one. Expert advice can be of great help to identify which tool can accommodate the app development objectives and build an engaging app. Striking a balance between the available options and user needs is crucial here.

#mobile #app #development

Top 9 Technologies Used to Develop Mobile Applications
Seamus  Quitzon

Seamus Quitzon

1602842940

How to Become a Faster Developer?

There are many things you can do to increase your velocity as a developer: for example, mastering the technology you're using, mastering OOP design, learning to type faster, or maybe using a latest-generation supercomputer. All of these things take time or money.

For this post, I want to share some tips (small things) you can implement right now to work faster as a developer. Over many years as a software developer, I've learned that there are several things -- unrelated to how you code or how you design the application -- that you can do to work faster. The idea is to reduce the non-productive time inside your development process.

Reduce the Compilation Time (Using Java and Maven)

There are some ways to help you reduce compilation time.

  • For developers using Maven to compile Java applications: skip the Javadoc generation when compiling in development environments (add -Dmaven.javadoc.skip=true to your Maven command).
  • Increase the number of threads Maven uses to compile (add -T 2C to your Maven command to use 2 threads per available CPU core).
  • Avoid the maven clean command. Maven builds your project incrementally, so if a module of your project hasn't changed, Maven doesn't need to recompile it. When you run clean, Maven deletes all previously generated artifacts and has to compile everything again.
  • In a multi-module project, you can compile only the module that has changed (one option is to add -pl $moduleName to your Maven command; you can get the same behavior using profiles).
  • Every time you compile, Maven connects to the internet to check dependencies. This is a good Maven feature, but the connection happens on every compilation, even if you already have all the dependencies in your local repository. To avoid it, compile in offline mode (add -o / --offline to your Maven command). A combined command using these flags is shown below.
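Putting those flags together, a development build might look like this ($moduleName is a placeholder for your own module):

$ mvn install -o -T 2C -pl $moduleName -Dmaven.javadoc.skip=true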

Now try these tips in your compilation process and you will see faster compilation. This will help you work faster because it reduces your waiting time.

#java #web dev #maven #development

How to Become a Faster Developer?