Comparing Python and SQL for Building Data Pipelines

Breaking into the workforce as a web developer, my first interaction with databases and SQL was through an Object Relational Mapper (ORM). I was using the Django QuerySet API and had an excellent experience using the interface.

Thereafter, I moved into a data engineering role and became much more involved in leveraging datasets to build AI. It became my responsibility to take the data from the user application and turn it into something usable by data scientists, a process commonly known as ETL (extract, transform, load).

The data on our production system was very messy and required a lot of transformations before anyone would be able to build AI on top of it. There were JSON columns with a different schema per row, columns containing mixed data types, and rows with erroneous values (people claiming to be born before 1850 or in the future). As I set out to clean, aggregate, and engineer features for the data, I tried to decide which language would be best for the task. Having used Python all day every day before this, I knew it could do the job. However, what I learned through this experience was that just because Python could do the job doesn't mean it should.
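To give a flavor of the cleaning involved, here is a minimal sketch of a sanity-check filter for the erroneous birth dates; the table and column names are hypothetical:

-- SQL (hypothetical names): keep only plausible birth dates
SELECT *
FROM raw_users
WHERE birth_date BETWEEN '1850-01-01' AND now();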

The first time I misjudged SQL was when I assumed that it couldn't do complicated transformations

We were working with a time-series dataset where we wanted to track particular users over time. Privacy laws prevented us from knowing the specific dates of the users' visits, so we decided to normalize each record's date to the user's first visit (e.g. 5 days after their first visit). For our analysis, it was important to know the time since the last visit as well as the time since the first visit. I had two sample datasets, one with approximately 7.5 million rows measuring 6.5 GB, and the other with 550,000 rows measuring 900 MB.

Using the Python and SQL code seen below, I used the smaller dataset to first test the transformations. Python and SQL completed the task in 591 and 40.9 seconds respectively, meaning SQL delivered a speed-up of roughly 14.5x!

# PYTHON
import pandas as pd

# connect to db using wrapper around psycopg2
db = DatabaseConnection(db='db', user='username', password='password')
# grab data from db and load into memory
df = db.run_query("SELECT * FROM cleaned_table;")
df = pd.DataFrame(df, columns=['user_id', 'series_id', 'timestamp'])
# calculate time since first visit
df = df.sort_values(['user_id', 'timestamp'], ascending=True).assign(
    time_since_first=df.groupby('user_id').timestamp.apply(lambda x: x - x.min()))
# calculate time since last visit
df = df.assign(
    time_since_last=df.sort_values(['timestamp'], ascending=True)
                      .groupby('user_id')['timestamp']
                      .transform(pd.Series.diff))
# save df to compressed csv
df.to_csv('transform_time_test.gz', compression='gzip')

-- SQL equivalent
-- increase the working memory (be careful with this)
SET work_mem = '600MB';

-- create a dual index on the partition
CREATE INDEX IF NOT EXISTS user_time_index ON table(user_id, timestamp);

-- calculate time since last visit and time since first visit in one pass
SELECT *,
       AGE(timestamp, LAG(timestamp, 1, timestamp) OVER w) AS time_since_last,
       AGE(timestamp, FIRST_VALUE(timestamp) OVER w) AS time_since_first
FROM table
WINDOW w AS (PARTITION BY user_id ORDER BY timestamp);

This SQL transformation was not only faster, but the code is also more readable and thus easier to maintain. Here, I used the LAG and FIRST_VALUE window functions to find specific records in each user's history (called a partition). I then used the AGE function to determine the time difference between visits.

What's even more interesting is that when these transformation scripts were applied to the 6.5 GB dataset, Python failed completely. Out of 3 attempts, Python crashed twice and my computer completely froze the 3rd time… while SQL took 226 seconds.

More info:

https://www.postgresql.org/docs/9.5/functions-window.html

http://www.postgresqltutorial.com/postgresql-window-function/

The second time I misjudged SQL was when I thought that it couldn't flatten irregular JSON

Another game changer for me was realizing that Postgres works with JSON quite well. I initially thought that it would be impossible to flatten or parse JSON in Postgres… I can't believe I was so dumb. If you want to relationalize JSON and its schema is consistent between rows, then your best bet is probably to use Postgres' built-in ability to parse JSON.

-- SQL (-> accesses a JSON field; ->> returns it as text)
SELECT user_json->'info'->>'name' AS user_name FROM user_table;

On the other hand, half the JSON in my sample dataset wasn't valid JSON and was therefore stored as text. I was left with a choice: I could either recode the data to make it valid, or I could simply drop the rows that didn't follow the rules. To do this, I created a new SQL function called is_json that I could use to qualify valid JSON in a WHERE clause.

-- SQL
create or replace function is_json(text)
returns boolean language plpgsql immutable as $$
begin
    perform $1::json;
    return true;
exception
    when invalid_text_representation then
        return false;
end $$;

SELECT user_json::json->'info'->>'name' AS user_name
FROM user_table
WHERE is_json(user_json);

Unfortunately, I found that user_json had a different schema depending on which app version the user was on. Although this makes sense from an application development point of view, it makes it really expensive to conditionally parse every possibility per row. Was I destined to reach for Python again… not a chance! I found another function on Stack Overflow written by a Postgres god named klin.

-- SQL
create or replace function create_jsonb_flat_view
    (table_name text, regular_columns text, json_column text)
    returns text language plpgsql as $$
declare
    cols text;
begin
    execute format ($ex$
        select string_agg(format('%2$s->>%%1$L "%%1$s"', key), ', ')
        from (
            select distinct key
            from %1$s, jsonb_each(%2$s)
            order by 1
            ) s;
        $ex$, table_name, json_column)
    into cols;
    execute format($ex$
        drop view if exists %1$s_view;
        create view %1$s_view as
        select %2$s, %3$s from %1$s
        $ex$, table_name, regular_columns, cols);
    return cols;
end $$;

This function was able to successfully flatten my JSON and solve my worst nightmare quite easily.
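For reference, calling it looks something like this; the names reuse the earlier examples and assume user_json is stored as jsonb:

-- SQL
-- build (or rebuild) the flattened view, then query it
SELECT create_jsonb_flat_view('user_table', 'user_id', 'user_json');
SELECT * FROM user_table_view;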

Final Comments

There is an idiom that declares Python the second-best language for almost anything. I believe this to be true, and in some instances I have found the performance difference between Python and the 'best' language to be negligible. In this case, however, Python was unable to compete with SQL. These realizations, along with the reading I've been doing, have completely changed my approach to ETL. I now work under the paradigm of "do not move data to code, move code to your data". Python moves your data to the code, while SQL acts on it in place. What's more, I know that I've only scratched the surface of SQL and Postgres' abilities. I'm looking forward to more awesome functionality, and the possibility of getting speed-ups from using an analytical warehouse.


Connect to Microsoft SQL Server database on macOS using Python

There are always situations where I want to automate small tasks. I like using Python for these kinds of things; you can quickly get something working without much hassle. I needed to perform some database changes in a Microsoft SQL Server database and wanted to connect to it using Python. On Windows this is usually pretty straightforward, but I use macOS as my main operating system and hit some hurdles along the way, so here is a quick how-to.

Preparing

If Homebrew isn't installed yet, let's do that first. It's an excellent package manager for macOS. Paste the following command in the terminal to install it:

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Once finished, run the following commands to add Microsoft's tap and install the Microsoft ODBC Driver 13.1 for SQL Server. This driver will be used to connect to the MS SQL database.

brew tap microsoft/mssql-release https://github.com/Microsoft/homebrew-mssql-release
brew update
brew install msodbcsql mssql-tools
Python package: pyodbc or pypyodbc

With the driver installed, we'll be using a Python package that uses the driver to connect to the database. There are two frequently used packages for connecting in Python: pyodbc and pypyodbc. pypyodbc is a pure-Python re-implementation of pyodbc. The main thing I took away was that pypyodbc is easier to set up on different platforms. I could not get pyodbc working; it simply wouldn't connect to the database.

Installing pypyodbc can be done using pip, the Python package installer.

pip install pypyodbc
Code

Now that the necessary components have been installed, let's open our favorite code editor and write code. The first thing that needs to be done is importing pypyodbc. The script starts with this:

import pypyodbc

Then we need a connection to the database:

sqlConnection = pypyodbc.connect(
    "Driver={ODBC Driver 13 for SQL Server};"
    "Server=<server IP address>;"
    "Database=<database>;"
    "uid=<username>;pwd=<password>")

Note that you have to replace four values in this code: server IP address, database, username and password. The value for the driver is a hard-coded value which indicates what driver to use to connect to the database; this value points to the driver that was installed earlier.

All that's left now is to use the connection and run a query.

cursor = sqlConnection.cursor()
cursor.execute("select * from Values")

The cursor now contains the result of our query; the property cursor.rowcount returns the number of rows the query returned. It's now possible to loop through the rows and access the different columns:

for row in cursor:
    print(row)
    # first column
    firstColumn = row[0]
    # ...

When we're done, we need to clean up the cursor and database connection by closing it.

cursor.close()
sqlConnection.close()

And that's it! Save the file and run it with the python <filename>.py or python3 <filename>.py command, depending on your configuration. Here is the entire script:

import pypyodbc

sqlConnection = pypyodbc.connect(
    "Driver={ODBC Driver 13 for SQL Server};"
    "Server=<server IP address>;"
    "Database=<database>;"
    "uid=<username>;pwd=<password>")

cursor = sqlConnection.cursor()
cursor.execute("select * from Values")

for row in cursor:
    print(row)
    # first column
    firstColumn = row[0]
    # ...

cursor.close()
sqlConnection.close()

The with syntax can also be used to automatically close the cursor and the connection. Here is another way of writing the same script:

import pypyodbc

with pypyodbc.connect(
        "Driver={ODBC Driver 13 for SQL Server};"
        "Server=<server IP address>;"
        "Database=<database>;"
        "uid=<username>;pwd=<password>") as sqlConnection:

    with sqlConnection.cursor() as cursor:

        cursor.execute("select * from Values")

        for row in cursor:
            print(row)
            # first column
            firstColumn = row[0]
            # ...


How to Prevent SQL Injection Attacks With Python

Every few years, the Open Web Application Security Project (OWASP) ranks the most critical web application security risks. Since the first report, injection risks have always been on top. Among all injection types, SQL injection is one of the most common attack vectors, and arguably the most dangerous. As Python is one of the most popular programming languages in the world, knowing how to protect against Python SQL injection is critical.

In this tutorial, you’re going to learn:

  • What Python SQL injection is and how to prevent it
  • How to compose queries with both literals and identifiers as parameters
  • How to safely execute queries in a database


This tutorial is suited for users of all database engines. The examples here use PostgreSQL, but the results can be reproduced in other database management systems (such as SQLite, MySQL, Microsoft SQL Server, Oracle, and so on).

Understanding Python SQL Injection

SQL Injection attacks are such a common security vulnerability that the legendary xkcd webcomic devoted a comic to it:

"Exploits of a Mom" (Image: xkcd)

Generating and executing SQL queries is a common task. However, companies around the world often make horrible mistakes when it comes to composing SQL statements. While the ORM layer usually composes SQL queries, sometimes you have to write your own.

When you use Python to execute these queries directly against a database, there's a chance you could make mistakes that might compromise your system. In this tutorial, you'll learn how to successfully implement functions that compose dynamic SQL queries without putting your system at risk for Python SQL injection.

Setting Up a Database

To get started, you’re going to set up a fresh PostgreSQL database and populate it with data. Throughout the tutorial, you’ll use this database to witness firsthand how Python SQL injection works.

Creating a Database

First, open your shell and create a new PostgreSQL database owned by the user postgres:

$ createdb -O postgres psycopgtest

Here you used the command line option -O to set the owner of the database to the user postgres. You also specified the name of the database, which is psycopgtest.

Note: postgres is a special user, which you would normally reserve for administrative tasks, but for this tutorial, it’s fine to use postgres. In a real system, however, you should create a separate user to be the owner of the database.

Your new database is ready to go! You can connect to it using psql:

$ psql -U postgres -d psycopgtest
psql (11.2, server 10.5)
Type "help" for help.

You’re now connected to the database psycopgtest as the user postgres. This user is also the database owner, so you’ll have read permissions on every table in the database.

Creating a Table With Data

Next, you need to create a table with some user information and add data to it:

psycopgtest=# CREATE TABLE users (
    username varchar(30),
    admin boolean
);
CREATE TABLE

psycopgtest=# INSERT INTO users
    (username, admin)
VALUES
    ('ran', true),
    ('haki', false);
INSERT 0 2

psycopgtest=# SELECT * FROM users;
 username | admin
----------+-------
 ran      | t
 haki     | f
(2 rows)

The table has two columns: username and admin. The admin column indicates whether or not a user has administrative privileges. Your goal is to target the admin field and try to abuse it.

Setting Up a Python Virtual Environment

Now that you have a database, it’s time to set up your Python environment.

Create your virtual environment in a new directory:

~/src $ mkdir psycopgtest
~/src $ cd psycopgtest
~/src/psycopgtest $ python3 -m venv venv

After you run this command, a new directory called venv will be created. This directory will store all the packages you install inside the virtual environment.

Connecting to the Database

To connect to a database in Python, you need a database adapter. Most database adapters follow version 2.0 of the Python Database API Specification, PEP 249. Every major database engine has a leading adapter: psycopg for PostgreSQL, MySQLdb for MySQL, cx_Oracle for Oracle, and sqlite3 for SQLite (shipped in the standard library).

To connect to a PostgreSQL database, you’ll need to install Psycopg, which is the most popular adapter for PostgreSQL in Python. Django ORM uses it by default, and it’s also supported by SQLAlchemy.

In your terminal, activate the virtual environment and use pip to install psycopg:

~/src/psycopgtest $ source venv/bin/activate
~/src/psycopgtest $ python -m pip install "psycopg2>=2.8.0"
Collecting psycopg2
  Using cached https://....
  psycopg2-2.8.2.tar.gz
Installing collected packages: psycopg2
  Running setup.py install for psycopg2 ... done
Successfully installed psycopg2-2.8.2

Now you’re ready to create a connection to your database. Here’s the start of your Python script:

import psycopg2

connection = psycopg2.connect(
    host="localhost",
    database="psycopgtest",
    user="postgres",
    password=None,
)
connection.set_session(autocommit=True)

You used psycopg2.connect() to create the connection. This function accepts the following arguments:

  • host is the IP address or the DNS of the server where your database is located. In this case, the host is your local machine, or localhost.

  • database is the name of the database to connect to. You want to connect to the database you created earlier, psycopgtest.

  • user is a user with permissions for the database. In this case, you want to connect to the database as the owner, so you pass the user postgres.

  • password is the password for whoever you specified in user. In most development environments, users can connect to the local database without a password.

After setting up the connection, you configured the session with autocommit=True. Activating autocommit means you won’t have to manually manage transactions by issuing a commit or rollback. This is the default behavior in most ORMs. You use this behavior here as well so that you can focus on composing SQL queries instead of managing transactions.
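For contrast, here is a minimal sketch of what the same session would look like if you managed transactions yourself; the INSERT statement and its values are purely illustrative:

import psycopg2

connection = psycopg2.connect(
    host="localhost",
    database="psycopgtest",
    user="postgres",
    password=None,
)
try:
    with connection.cursor() as cursor:
        # nothing is persisted until the explicit commit below
        cursor.execute(
            "INSERT INTO users (username, admin) VALUES (%s, %s)",
            ('tal', False),
        )
    connection.commit()
except Exception:
    # undo the partial transaction on error
    connection.rollback()
    raise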

Note: Django users can get the instance of the connection used by the ORM from django.db.connection:

from django.db import connection

Executing a Query

Now that you have a connection to the database, you’re ready to execute a query:

>>> with connection.cursor() as cursor:
...     cursor.execute('SELECT COUNT(*) FROM users')
...     result = cursor.fetchone()
... print(result)
(2,)

You used the connection object to create a cursor. Just like a file in Python, cursor is implemented as a context manager. When you create the context, a cursor is opened for you to use to send commands to the database. When the context exits, the cursor closes and you can no longer use it.

While inside the context, you used cursor to execute a query and fetch the results. In this case, you issued a query to count the rows in the users table. To fetch the result from the query, you executed cursor.fetchone() and received a tuple. Since the query can only return one result, you used fetchone(). If the query were to return more than one result, then you’d need to either iterate over cursor or use one of the other fetch* methods.
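To illustrate that last point, here is a short sketch that iterates over a multi-row result, reusing the users table from above:

with connection.cursor() as cursor:
    cursor.execute('SELECT username, admin FROM users')
    # the cursor is iterable, yielding one tuple per row
    for username, admin in cursor:
        print(username, admin)
    # alternatively, cursor.fetchall() returns all rows as a list of tuples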

Using Query Parameters in SQL

In the previous section, you created a database, established a connection to it, and executed a query. The query you used was static. In other words, it had no parameters. Now you’ll start to use parameters in your queries.

First, you’re going to implement a function that checks whether or not a user is an admin. is_admin() accepts a username and returns that user’s admin status:

# BAD EXAMPLE. DON'T DO THIS!
def is_admin(username: str) -> bool:
    with connection.cursor() as cursor:
        cursor.execute("""
            SELECT
                admin
            FROM
                users
            WHERE
                username = '%s'
        """ % username)
        result = cursor.fetchone()
    admin, = result
    return admin

This function executes a query to fetch the value of the admin column for a given username. You used fetchone() to return a tuple with a single result. Then, you unpacked this tuple into the variable admin. To test your function, check some usernames:

>>> is_admin('haki')
False
>>> is_admin('ran')
True

So far so good. The function returned the expected result for both users. But what about a non-existing user? Take a look at this Python traceback:

>>> is_admin('foo')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 12, in is_admin
TypeError: cannot unpack non-iterable NoneType object

When the user does not exist, a TypeError is raised. This is because .fetchone() returns None when no results are found, and unpacking None raises a TypeError. The line that fails is the one where you unpack the tuple to populate admin from result.

To handle non-existing users, create a special case for when result is None:

# BAD EXAMPLE. DON'T DO THIS!
def is_admin(username: str) -> bool:
    with connection.cursor() as cursor:
        cursor.execute("""
            SELECT
                admin
            FROM
                users
            WHERE
                username = '%s'
        """ % username)
        result = cursor.fetchone()

    if result is None:
        # User does not exist
        return False

    admin, = result
    return admin

Here, you’ve added a special case for handling None. If username does not exist, then the function should return False. Once again, test the function on some users:

>>> is_admin('haki')
False
>>> is_admin('ran')
True
>>> is_admin('foo')
False

Great! The function can now handle non-existing usernames as well.


Exploiting Query Parameters With Python SQL Injection

In the previous example, you used string interpolation to generate a query. Then, you executed the query and sent the resulting string directly to the database. However, there’s something you may have overlooked during this process.

Think back to the username argument you passed to is_admin(). What exactly does this variable represent? You might assume that username is just a string that represents an actual user’s name. As you’re about to see, though, an intruder can easily exploit this kind of oversight and cause major harm by performing Python SQL injection.

Try to check if the following user is an admin or not:

>>> is_admin("'; select true; --")
True

Wait… What just happened?

Let’s take another look at the implementation. Print out the actual query being executed in the database:

>>> print("select admin from users where username = '%s'" % "'; select true; --")
select admin from users where username = ''; select true; --'

The resulting text contains three statements. To understand exactly how Python SQL injection works, you need to inspect each part individually. The first statement is as follows:

select admin from users where username = '';

This is your intended query. The semicolon (;) terminates the query, so the result of this query does not matter. Next up is the second statement:

select true;

This statement was constructed by the intruder. It’s designed to always return True.

Lastly, you see this short bit of code:

--'

This snippet defuses anything that comes after it. The intruder added the comment symbol (--) to turn everything you might have put after the last placeholder into a comment.

When you execute the function with this argument, it will always return True. If, for example, you use this function in your login page, an intruder could log in with the username '; select true; --, and they’ll be granted access.

If you think this is bad, it could get worse! Intruders with knowledge of your table structure can use Python SQL injection to cause permanent damage. For example, the intruder can inject an update statement to alter the information in the database:

>>> is_admin('haki')
False
>>> is_admin("'; update users set admin = 'true' where username = 'haki'; select true; --")
True
>>> is_admin('haki')
True

Let’s break it down again:

';

This snippet terminates the query, just like in the previous injection. The next statement is as follows:

update users set admin = 'true' where username = 'haki';

This section updates admin to true for user haki.

Finally, there’s this code snippet:

select true; --

As in the previous example, this piece returns true and comments out everything that follows it.

Why is this worse? Well, if the intruder manages to execute the function with this input, then user haki will become an admin:

psycopgtest=# select * from users;
 username | admin
----------+-------
 ran      | t
 haki     | t
(2 rows)

The intruder no longer has to use the hack. They can just log in with the username haki. (If the intruder really wanted to cause harm, then they could even issue a DROP DATABASE command.)

Before you forget, restore haki back to its original state:

psycopgtest=# update users set admin = false where username = 'haki';
UPDATE 1

So, why is this happening? Well, what do you know about the username argument? You know it should be a string representing the username, but you don’t actually check or enforce this assertion. This can be dangerous! It’s exactly what attackers are looking for when they try to hack your system.

Crafting Safe Query Parameters

In the previous section, you saw how an intruder can exploit your system and gain admin permissions by using a carefully crafted string. The issue was that you allowed the value passed from the client to be executed directly to the database, without performing any sort of check or validation. SQL injections rely on this type of vulnerability.

Any time user input is used in a database query, there’s a possible vulnerability for SQL injection. The key to preventing Python SQL injection is to make sure the value is being used as the developer intended. In the previous example, you intended for username to be used as a string. In reality, it was used as a raw SQL statement.

To make sure values are used as they’re intended, you need to escape the value. For example, to prevent intruders from injecting raw SQL in the place of a string argument, you can escape quotation marks:

>>> # BAD EXAMPLE. DON'T DO THIS!
>>> username = username.replace("'", "''")

This is just one example. There are a lot of special characters and scenarios to think about when trying to prevent Python SQL injection. Lucky for you, modern database adapters come with built-in tools for preventing Python SQL injection by using query parameters. These are used instead of plain string interpolation to compose a query with parameters.

Note: Different adapters, databases, and programming languages refer to query parameters by different names. Common names include bind variables, replacement variables, and substitution variables.

Now that you have a better understanding of the vulnerability, you’re ready to rewrite the function using query parameters instead of string interpolation:

def is_admin(username: str) -> bool:
    with connection.cursor() as cursor:
        cursor.execute("""
            SELECT
                admin
            FROM
                users
            WHERE
                username = %(username)s
        """, {
            'username': username
        })
        result = cursor.fetchone()

    if result is None:
        # User does not exist
        return False

    admin, = result
    return admin

Here’s what’s different in this example:

  • In line 9, you used a named parameter username to indicate where the username should go. Notice how the parameter username is no longer surrounded by single quotation marks.

  • In line 11, you passed the value of username as the second argument to cursor.execute(). The connection will use the type and value of username when executing the query in the database.

To test this function, try some valid and invalid values, including the dangerous string from before:

>>> is_admin('haki')
False
>>> is_admin('ran')
True
>>> is_admin('foo')
False
>>> is_admin("'; select true; --")
False

Amazing! The function returned the expected result for all values. What’s more, the dangerous string no longer works. To understand why, you can inspect the query generated by execute():

>>> with connection.cursor() as cursor:
...    cursor.execute("""
...        SELECT
...            admin
...        FROM
...            users
...        WHERE
...            username = %(username)s
...    """, {
...        'username': "'; select true; --"
...    })
...    print(cursor.query.decode('utf-8'))
SELECT
    admin
FROM
    users
WHERE
    username = '''; select true; --'

The connection treated the value of username as a string and escaped any characters that might terminate the string and introduce Python SQL injection.
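If you want to inspect a bound query without executing it, psycopg2 cursors also offer mogrify(), which returns the exact bytes that would be sent to the server:

with connection.cursor() as cursor:
    print(cursor.mogrify(
        "SELECT admin FROM users WHERE username = %(username)s",
        {'username': "'; select true; --"},
    ).decode('utf-8'))
# SELECT admin FROM users WHERE username = '''; select true; --'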

Passing Safe Query Parameters

Database adapters usually offer several ways to pass query parameters. Named placeholders are usually the best for readability, but some implementations might benefit from using other options.

Let’s take a quick look at some of the right and wrong ways to use query parameters. The following code block shows the types of queries you’ll want to avoid:

# BAD EXAMPLES. DON'T DO THIS!
cursor.execute("SELECT admin FROM users WHERE username = '" + username + '");
cursor.execute("SELECT admin FROM users WHERE username = '%s' % username);
cursor.execute("SELECT admin FROM users WHERE username = '{}'".format(username));
cursor.execute(f"SELECT admin FROM users WHERE username = '{username}'");

Each of these statements passes username from the client directly to the database, without performing any sort of check or validation. This sort of code is ripe for inviting Python SQL injection.

In contrast, these types of queries should be safe for you to execute:

# SAFE EXAMPLES. DO THIS!
cursor.execute("SELECT admin FROM users WHERE username = %s'", (username, ));
cursor.execute("SELECT admin FROM users WHERE username = %(username)s", {'username': username});

In these statements, username is passed as a query parameter (positional in the first, named in the second). Now, the database will use the specified type and value of username when executing the query, offering protection from Python SQL injection.

Using SQL Composition

So far you’ve used parameters for literals. Literals are values such as numbers, strings, and dates. But what if you have a use case that requires composing a different query—one where the parameter is something else, like a table or column name?

Inspired by the previous example, let’s implement a function that accepts the name of a table and returns the number of rows in that table:

# BAD EXAMPLE. DON'T DO THIS!
def count_rows(table_name: str) -> int:
    with connection.cursor() as cursor:
        cursor.execute("""
            SELECT
                count(*)
            FROM
                %(table_name)s
        """, {
            'table_name': table_name,
        })
        result = cursor.fetchone()

    rowcount, = result
    return rowcount

Try to execute the function on your users table:

>>> count_rows('users')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 9, in count_rows
psycopg2.errors.SyntaxError: syntax error at or near "'users'"
LINE 5:                 'users'
                        ^

The command failed to generate the SQL. As you’ve seen already, the database adapter treats the variable as a string or a literal. A table name, however, is not a plain string. This is where SQL composition comes in.

You already know it's not safe to use string interpolation to compose SQL. Luckily, Psycopg provides a module called psycopg2.sql to help you safely compose SQL queries. Let's rewrite the function using psycopg2.sql.SQL():

from psycopg2 import sql

def count_rows(table_name: str) -> int:
    with connection.cursor() as cursor:
        stmt = sql.SQL("""
            SELECT
                count(*)
            FROM
                {table_name}
        """).format(
            table_name = sql.Identifier(table_name),
        )
        cursor.execute(stmt)
        result = cursor.fetchone()

    rowcount, = result
    return rowcount

There are two differences in this implementation. First, you used sql.SQL() to compose the query. Then, you used sql.Identifier() to annotate the argument value table_name. (An identifier is a column or table name.)
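As an aside, the same building blocks compose more than single names. For example, this sketch (using the users table and columns from earlier) safely quotes a whole column list:

from psycopg2 import sql

columns = ['username', 'admin']
stmt = sql.SQL("SELECT {fields} FROM {table}").format(
    # join the quoted column identifiers with commas
    fields=sql.SQL(', ').join(map(sql.Identifier, columns)),
    table=sql.Identifier('users'),
)
with connection.cursor() as cursor:
    cursor.execute(stmt)
    print(cursor.fetchall())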

Note: Users of the popular package django-debug-toolbar might get an error in the SQL panel for queries composed with psycopg2.sql.SQL(). A fix is expected for release in version 2.0.

Now, try executing the function on the users table:

>>> count_rows('users')
2

Great! Next, let’s see what happens when the table does not exist:

>>> count_rows('foo')
Traceback (most recent call last):
  File "", line 1, in 
  File "", line 11, in count_rows
psycopg2.errors.UndefinedTable: relation "foo" does not exist
LINE 5:                 "foo"
                        ^

The function throws the UndefinedTable exception. In the following steps, you’ll use this exception as an indication that your function is safe from a Python SQL injection attack.

Note: The exception UndefinedTable was added in psycopg2 version 2.8. If you’re working with an earlier version of Psycopg, then you’ll get a different exception.

To put it all together, add an option to count rows in the table up to a certain limit. This feature might be useful for very large tables. To implement this, add a LIMIT clause to the query, along with query parameters for the limit’s value:

from psycopg2 import sql

def count_rows(table_name: str, limit: int) -> int:
    with connection.cursor() as cursor:
        stmt = sql.SQL("""
            SELECT
                COUNT(*)
            FROM (
                SELECT
                    1
                FROM
                    {table_name}
                LIMIT
                    {limit}
            ) AS limit_query
        """).format(
            table_name = sql.Identifier(table_name),
            limit = sql.Literal(limit),
        )
        cursor.execute(stmt)
        result = cursor.fetchone()

    rowcount, = result
    return rowcount

In this code block, you annotated limit using sql.Literal(). As in the previous example, psycopg will bind all query parameters as literals when using the simple approach. However, when using sql.SQL(), you need to explicitly annotate each parameter using either sql.Identifier() or sql.Literal().
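To sanity-check what a composed statement expands to, you can render it with as_string(), which takes the connection as its context; a small sketch:

stmt = sql.SQL("SELECT 1 FROM {table} LIMIT {limit}").format(
    table=sql.Identifier('users'),
    limit=sql.Literal(10),
)
# render the Composable to the SQL string psycopg2 would send
print(stmt.as_string(connection))
# SELECT 1 FROM "users" LIMIT 10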

Note: Unfortunately, the Python Database API specification does not address the binding of identifiers, only literals. Psycopg is the only popular adapter that has added the ability to safely compose SQL with both literals and identifiers. This fact makes it even more important to pay close attention when binding identifiers.

Execute the function to make sure that it works:

>>> count_rows('users', 1)
1
>>> count_rows('users', 10)
2

Now that you see the function is working, make sure it’s also safe:

>>> count_rows("(select 1) as foo; update users set admin = true where name = 'haki'; --", 1)
Traceback (most recent call last):
  File "", line 1, in 
  File "", line 18, in count_rows
psycopg2.errors.UndefinedTable: relation "(select 1) as foo; update users set admin = true where name = '" does not exist
LINE 8:                     "(select 1) as foo; update users set adm...
                            ^

This traceback shows that psycopg escaped the value, and the database treated it as a table name. Since a table with this name doesn’t exist, an UndefinedTable exception was raised and you were not hacked!


Conclusion

You’ve successfully implemented a function that composes dynamic SQL without putting your system at risk for Python SQL injection! You’ve used both literals and identifiers in your query without compromising security.

You’ve learned:

  • What Python SQL injection is and how it can be exploited
  • How to prevent Python SQL injection using query parameters
  • How to safely compose SQL statements that use literals and identifiers as parameters

You’re now able to create programs that can withstand attacks from the outside. Go forth and thwart the hackers!

How To Manage an SQL Database

An SQL Cheat Sheet

Introduction

SQL databases come installed with all the commands you need to add, modify, delete, and query your data. This cheat sheet-style guide provides a quick reference to some of the most commonly-used SQL commands.

How to Use This Guide:

  • This guide is in cheat sheet format with self-contained command-line snippets
  • Jump to any section that is relevant to the task you are trying to complete
  • When you see placeholder values (like table or column) in this guide's commands, keep in mind that they should refer to the columns, tables, and data in your own database.
  • Throughout this guide, the example data values given are all wrapped in apostrophes ('). In SQL, it is necessary to wrap any data values that consist of strings in apostrophes. This isn't required for numeric data, but it also won't cause any issues if you do include apostrophes (see the example after this list).
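For instance, a single INSERT following that convention, using this guide's placeholder style:

INSERT INTO table ( column_A, column_B ) VALUES ( 'string_value', 1234 );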

Please note that, while SQL is recognized as a standard, most SQL database programs have their own proprietary extensions. This guide uses MySQL as the example relational database management system (RDBMS), but the commands given will work with other relational database programs, including PostgreSQL, MariaDB, and SQLite. Where there are significant differences between RDBMSs, we have included the alternative commands.

Opening up the Database Prompt (using Socket/Trust Authentication)

By default on Ubuntu 18.04, the root MySQL user can authenticate without a password using the following command:

sudo mysql


To open up a PostgreSQL prompt, use the following command. This example will log you in as the postgres user, which is the included superuser role, but you can replace that with any already-created role:

sudo -u postgres psql


Opening up the Database Prompt (using Password Authentication)

If your root MySQL user is set to authenticate with a password, you can do so with the following command:

mysql -u root -p


If you’ve already set up a non-root user account for your database, you can also use this method to log in as that user:

mysql -u user -p


The above command will prompt you for your password after you run it. If you’d like to supply your password as part of the command, immediately follow the -p option with your password, with no space between them:

mysql -u root -ppassword


Creating a Database

The following command creates a database with default settings.

CREATE DATABASE database_name;


If you want your database to use a character set and collation different than the defaults, you can specify those using this syntax:

CREATE DATABASE database_name CHARACTER SET character_set COLLATE collation;
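For example, a concrete MySQL invocation might look like this (utf8mb4 with its Unicode collation is a common choice; substitute your own):

CREATE DATABASE database_name CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;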


Listing Databases

To see what databases exist in your MySQL or MariaDB installation, run the following command:

SHOW DATABASES;


In PostgreSQL, you can see what databases have been created with the following command:

\list


Deleting a Database

To delete a database, including any tables and data held within it, run a command that follows this structure:

DROP DATABASE IF EXISTS database;


Creating a User

To create a user profile for your database without specifying any privileges for it, run the following command:

CREATE USER username IDENTIFIED BY 'password';


PostgreSQL uses a similar, but slightly different, syntax:

CREATE USER user WITH PASSWORD 'password';


If you want to create a new user and grant them privileges in one command, you can do so by issuing a GRANT statement. The following command creates a new user and grants them full privileges to every database and table in the RDBMS:

GRANT ALL PRIVILEGES ON *.* TO 'username'@'localhost' IDENTIFIED BY 'password';


Note the PRIVILEGES keyword in this previous GRANT statement. In most RDBMSs, this keyword is optional, and this statement can be equivalently written as:

GRANT ALL ON *.* TO 'username'@'localhost' IDENTIFIED BY 'password';


Be aware, though, that the PRIVILEGES keyword is required for granting privileges like this when Strict SQL mode is turned on.

Deleting a User

Use the following syntax to delete a database user profile:

DROP USER IF EXISTS username;


Note that this command will not by default delete any tables created by the deleted user, and attempts to access such tables may result in errors.

Selecting a Database

Before you can create a table, you first have to tell the RDBMS the database in which you’d like to create it. In MySQL and MariaDB, do so with the following syntax:

USE database;


In PostgreSQL, you must use the following command to select your desired database:

\connect database


Creating a Table

The following command structure creates a new table with the name table, and includes two columns, each with their own specific data type:

CREATE TABLE table ( column_1 column_1_data_type, column_2 column_2_data_type );


Deleting a Table

To delete a table entirely, including all its data, run the following:

DROP TABLE IF EXISTS table;


Inserting Data into a Table

Use the following syntax to populate a table with one row of data:

INSERT INTO table ( column_A, column_B, column_C ) VALUES ( 'data_A', 'data_B', 'data_C' );


You can also populate a table with multiple rows of data using a single command, like this:

INSERT INTO table ( column_A, column_B, column_C ) VALUES ( 'data_1A', 'data_1B', 'data_1C' ),  ( 'data_2A', 'data_2B', 'data_2C' ), ( 'data_3A', 'data_3B', 'data_3C' );


Deleting Data from a Table

To delete a row of data from a table, use the following command structure. Note that value should be the value held in the specified column in the row that you want to delete:

DELETE FROM table WHERE column='value';


Note: If you do not include a WHERE clause in a DELETE statement, as in the following example, it will delete all the data held in a table, but not the columns or the table itself:

DELETE FROM table;


Changing Data in a Table

Use the following syntax to update the data held in a given row. Note that the WHERE clause at the end of the command tells SQL which row to update. value is the value held in column_A that aligns with the row you want to change.

Note: If you fail to include a WHERE clause in an UPDATE statement, the command will replace the data held in every row of the table.

UPDATE table SET column_1 = value_1, column_2 = value_2 WHERE column_A=value;


Inserting a Column

The following command syntax will add a new column to a table:

ALTER TABLE table ADD COLUMN column data_type;


Deleting a Column

A command following this structure will delete a column from a table:

ALTER TABLE table DROP COLUMN column;


Performing Basic Queries

To view all the data from a single column in a table, use the following syntax:

SELECT column FROM table;


To query multiple columns from the same table, separate the column names with a comma:

SELECT column_1, column_2 FROM table;


You can also query every column in a table by replacing the names of the columns with an asterisk (*). In SQL, asterisks act as placeholders to represent “all”:

SELECT * FROM table;


Using WHERE Clauses

You can narrow down the results of a query by appending the SELECT statement with a WHERE clause, like this:

SELECT column FROM table WHERE conditions_that_apply;


For example, you can query all the data from a single row with a syntax like the following. Note that value should be a value held in both the specified column and the row you want to query:

SELECT * FROM table WHERE column = value;


Working with Comparison Operators

A comparison operator in a WHERE clause defines how the specified column should be compared against the value. Here are some common SQL comparison operators:

=             tests for equality
!=            tests for inequality
<             tests for less-than
>             tests for greater-than
<=            tests for less-than or equal-to
>=            tests for greater-than or equal-to
BETWEEN       tests whether a value lies within a given range
IN            tests whether a row's value is contained in a set of specified values
EXISTS        tests whether rows exist, given the specified conditions
LIKE          tests whether a value matches a specified string
IS NULL       tests for NULL values
IS NOT NULL   tests for all values other than NULL
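As one worked instance, BETWEEN condenses a pair of comparisons into a single range condition, again using placeholder names:

SELECT * FROM table WHERE column BETWEEN 'value_A' AND 'value_B';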

Working with Wildcards

SQL allows the use of wildcard characters. These are useful if you're trying to find a specific entry in a table, but aren't sure of what that entry is exactly.

Asterisks (*) are placeholders that represent "all"; the following will query every column in a table:

SELECT * FROM table;


Percentage signs (%) represent zero or more unknown characters.

SELECT * FROM table WHERE column LIKE 'val%';


Underscores (_) are used to represent a single unknown character:

SELECT * FROM table WHERE column LIKE 'v_lue';


Counting Entries in a Column

The COUNT function is used to find the number of entries in a given column. The following syntax will return the total number of values held in column:

SELECT COUNT(column) FROM table;


You can narrow down the results of a COUNT function by appending a WHERE clause, like this:

SELECT COUNT(column) FROM table WHERE column=value;


Finding the Average Value in a Column

The AVG function is used to find the average (in this case, the mean) amongst values held in a specific column. Note that the AVG function will only work with columns holding numeric values; when used on a column holding string values, it may return an error or 0:

SELECT AVG(column) FROM table;


Finding the Sum of Values in a Column

The SUM function is used to find the sum total of all the numeric values held in a column:

SELECT SUM(column) FROM table;


As with the AVG function, if you run the SUM function on a column holding string values it may return an error or just 0, depending on your RDBMS.

Finding the Largest Value in a Column

To find the largest numeric value in a column or the last value alphabetically, use the MAX function:

SELECT MAX(column) FROM table;


Finding the Smallest Value in a Column

To find the smallest numeric value in a column or the first value alphabetically, use the MIN function:

SELECT MIN(column) FROM table;


Sorting Results with ORDER BY Clauses

An ORDER BY clause is used to sort query results. The following query syntax returns the values from column_1 and column_2 and sorts the results by the values held in column_1 in ascending order or, for string values, in alphabetical order:

SELECT column_1, column_2 FROM table ORDER BY column_1;


To perform the same action, but order the results in descending or reverse alphabetical order, append the query with DESC:

SELECT column_1, column_2 FROM table ORDER BY column_1 DESC;


Sorting Results with GROUP BY Clauses

The GROUP BY clause is similar to the ORDER BY clause, but it is used to sort the results of a query that includes an aggregate function such as COUNT, MAX, MIN, or SUM. On their own, the aggregate functions described in the previous section will only return a single value. However, you can view the results of an aggregate function performed on every matching value in a column by including a GROUP BY clause.

The following syntax will count the number of matching values in column_2 and group them in ascending or alphabetical order:

SELECT COUNT(column_1), column_2 FROM table GROUP BY column_2;


To perform the same action, but group the results in descending or reverse alphabetical order, append the query with DESC:

SELECT COUNT(column_1), column_2 FROM table GROUP BY column_2 DESC;


Querying Multiple Tables with JOIN Clauses

JOIN clauses are used to create result sets that combine rows from two or more tables. A JOIN clause requires a related column in each table to join on, as with common_column in this example:

SELECT table_1.column_1, table_2.column_2 FROM table_1 JOIN table_2 ON table_1.common_column=table_2.common_column;


This is an example of an INNER JOIN clause. An INNER JOIN will return all the records that have matching values in both tables, but won’t show any records that don’t have matching values.

It’s possible to return all the records from one of two tables, including values that do not have a corresponding match in the other table, by using an outer JOIN clause. Outer JOIN clauses are written as either LEFT JOIN or RIGHT JOIN.

A LEFT JOIN clause returns all the records from the "left" table and only the matching records from the "right" table. In the context of outer JOIN clauses, the left table is the one referenced in the FROM clause, and the right table is any other table referenced after the JOIN statement. The following will show every record from table_1 and only the matching values from table_2. Any values that do not have a match in table_2 will appear as NULL in the result set:

SELECT table_1.column_1, table_2.column_2 FROM table_1 LEFT JOIN table_2 ON table_1.common_column=table_2.common_column;


A RIGHT JOIN clause functions the same as a LEFT JOIN, but it returns all the results from the right table, and only the matching values from the left:

SELECT table_1.column_1, table_2.column_2 FROM table_1 RIGHT JOIN table_2 ON table_1.common_column=table_2.common_column;


Combining Multiple SELECT Statements with UNION Clauses

A UNION operator is useful for combining the results of two (or more) SELECT statements into a single result set:

SELECT column_1 FROM table UNION SELECT column_2 FROM table;


Additionally, the UNION clause can combine two (or more) SELECT statements querying different tables into the same result set:

SELECT column FROM table_1 UNION SELECT column FROM table_2;


Conclusion

This guide covers some of the more common commands in SQL used to manage databases, users, and tables, and query the contents held in those tables. There are, however, many combinations of clauses and operators that all produce unique result sets. If you’re looking for a more comprehensive guide to working with SQL, we encourage you to check out Oracle’s Database SQL Reference.

