Python Web Scraping Tutorial

Web scraping allows us to extract information from web pages. In this tutorial, you'll learn how to build a web scraper with Python.

Introduction

If you are into data analysis, big data, machine learning or even AI projects, chances are you will need to collect data from various websites. Python is very commonly used for manipulating and working with data due to its stability, extensive statistical libraries and simplicity (these are just my opinions). We will use Python to scrape the trending repositories on GitHub.

Prerequisites

Before we begin this tutorial, please set up a Python environment on your machine. Head over to the official page here to install it if you have not done so.

In this tutorial I will be using Visual Studio Code as the IDE on a Windows machine, but feel free to use your IDE of choice. If you are using VS Code, follow the instructions here to set up Python support for VS Code.

We will also be installing the Beautiful Soup and Requests modules in our virtual environment later.

The Requests library allows us to easily make HTTP requests, while Beautiful Soup makes parsing the returned HTML much easier for us.

Tutorial

Let’s first look into what we will be scraping:

GitHub Trending Page

What we will be doing is extracting all the information about the trending repositories, such as the name, stars, links, etc.

Creating the project

Make a folder somewhere on your disk and let’s call it python-scraper-github. Navigate to the folder and let’s first create a virtual environment.

python -m venv env

Wait for this to complete, and you will notice that it creates a folder called env in the root of our project. This folder will contain all the packages that Python needs for this project; any new modules we install will go into it.
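As a quick aside, VS Code can activate the environment for you when it opens a terminal (covered below), but you can also activate it by hand. A sketch, assuming the env folder name used above (the exact command depends on your platform):

```shell
# Create the virtual environment (same command as above)
python -m venv env

# Activate it manually -- the command differs per platform:
#   Windows (cmd / PowerShell):  env\Scripts\activate
#   macOS / Linux:               source env/bin/activate
```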

A virtual environment is a tool that keeps the dependencies required by different projects separate by creating an isolated Python environment for each of them. It is one of the most important tools that Python developers use.
Type code . in the command line to open up the folder in VS Code or just find the folder to open in the main VS Code window.

How our project will look

Press ctrl + shift + p to open up the command palette, select the command Python: Select Interpreter like below, and select the env environment.

Choose our env folder as the interpreter

Great, now that you have set up the interpreter, we can start a terminal in our folder. Open up a new terminal via Terminal -> New Terminal. You will see that the first line is something similar to

(env) PS E:\Projects\Tutorials\python-scraper-github> 

That is because when we open a new terminal via VS Code, it automatically activates our virtual environment.

Installing Dependencies

While in the terminal, enter the following (pip comes pre-installed with Python 2.7.9 / 3.4 and above):

pip install requests beautifulsoup4

Now that we are done installing the modules, let’s create a new file and call it scraper-github-trending.py

import requests
from bs4 import BeautifulSoup

# Collect the GitHub trending page
page = requests.get('https://github.com/trending')
print(page)

We have imported the libraries and then made a request to get the GitHub trending page. Let's run this file and see what the output is.

To run a particular Python file, right-click on the file -> Run Python File in Terminal

<Response [200]>

This is the output we get. Great, a 200 response means that the page was fetched successfully. Let's now use our Beautiful Soup module to create an object. Add the below into the file.

# Create a BeautifulSoup object

soup = BeautifulSoup(page.text, 'html.parser')

print(soup)

Output when running this new file

When we run the file, we can get the entire html page of the GitHub trending page! Let’s now explore how we can extract the useful data.

Extracting data

Highlighted shows ‘repo-list’

Head over to your browser (Chrome in this case) and open up the GitHub Trending Page. Right-click anywhere and choose Inspect, and you can see that the entire body of our wanted data is in the tag <div class="repo-list">, so the class repo-list should be our initial focus.

Each individual repository information

Next, we can see that each repository is defined in a <li class='col-12 d-block width-full py-4 border-bottom'> tag. This is what we will retrieve next.

import requests
from bs4 import BeautifulSoup


page = requests.get('https://github.com/trending')

# Create a BeautifulSoup object
soup = BeautifulSoup(page.text, 'html.parser')

# get the repo list
repo = soup.find(class_="repo-list")

# find all instances of that class (should return 25 as shown in the github main page)
repo_list = repo.find_all(class_='col-12 d-block width-full py-4 border-bottom')

print(len(repo_list))


Your code should now look like this. If you run this script now, the output should show 25
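To see find versus find_all in isolation, here is a minimal sketch on a hand-written HTML snippet (the markup is made up for illustration; only the class names match the real page):

```python
from bs4 import BeautifulSoup

# A tiny stand-in for the trending page, reusing the same class names
html = """
<div class="repo-list">
  <li class="col-12 d-block width-full py-4 border-bottom"><a>dev1 / repo1</a></li>
  <li class="col-12 d-block width-full py-4 border-bottom"><a>dev2 / repo2</a></li>
</div>
"""

soup = BeautifulSoup(html, 'html.parser')
# find() returns the first matching tag, find_all() returns every match
repo = soup.find(class_="repo-list")
repo_list = repo.find_all(class_='col-12 d-block width-full py-4 border-bottom')
print(len(repo_list))  # 2
```

On the real page the same two calls return the 25 trending repositories.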

Next, we will iterate through the list to retrieve the desired information for each repository.

Repository Name

Highlighted shows the tag that displays full repository name

The above snippet shows that the full repository name occurs under the very first <a> tag, so we can extract the text from it. Since it returns a string with a / in between, we can split the string on / to get an array of strings: the first index will have the developer name and the next index will have the repository name.
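As a quick illustration of that split-and-strip step (the sample string is made up, mimicking the whitespace GitHub renders around the names):

```python
# Hypothetical anchor text with the '/' separator and surrounding whitespace
full_repo_name = '\n      dev-name /\n      repo-name\n'.split('/')
developer = full_repo_name[0].strip()   # 'dev-name'
repo_name = full_repo_name[1].strip()   # 'repo-name'
print(developer, repo_name)
```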

Number of Stars

Stars are defined using tag with class

Since not every repository has the number of stars as its first element, we cannot use the position to retrieve it. However, we can see that the <svg> that draws the star and the number of stars itself are under the same parent. So if we get the <svg> by its class octicon octicon-star, we can go to the parent and then extract the text (which will be the number of stars).
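Here is that parent-navigation trick on a made-up fragment that mirrors the star counter's structure (only the octicon octicon-star class is taken from the real page):

```python
from bs4 import BeautifulSoup

# Invented markup mirroring the structure described above
html = '<a class="muted-link"><svg class="octicon octicon-star"></svg> 3,402</a>'
soup = BeautifulSoup(html, 'html.parser')

# Locate the star icon, step up to its parent, and read the parent's text
stars = soup.find(class_='octicon octicon-star').parent.text.strip()
print(stars)  # 3,402
```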

For loop

import requests
from bs4 import BeautifulSoup


page = requests.get('https://github.com/trending')

# Create a BeautifulSoup object
soup = BeautifulSoup(page.text, 'html.parser')

# get the repo list
repo = soup.find(class_="repo-list")

# find all instances of that class (should return 25 as shown in the github main page)
repo_list = repo.find_all(class_='col-12 d-block width-full py-4 border-bottom')

print(len(repo_list))

for repo in repo_list:
    # find the first <a> tag and get the text. Split the text using '/' to get an array with developer name and repo name
    full_repo_name = repo.find('a').text.split('/')
    # extract the developer name at index 0
    developer = full_repo_name[0].strip()
    # extract the repo name at index 1
    repo_name = full_repo_name[1].strip()
    # find the first occurrence of the class octicon octicon-star and get the text from its parent (which is the number of stars)
    stars = repo.find(class_='octicon octicon-star').parent.text.strip()
    # strip() removes leading and trailing whitespace
    print('developer', developer)
    print('name', repo_name)
    print('stars', stars)


I have already implemented the loop as shown above. For each item in our repo_list (which contains 25 items), we find the developer, the repo name and the stars.
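Note that the extracted star count is text such as '3,402'. If you want to work with it as a number (an extra step, not part of the original script), you can drop the comma and convert it:

```python
# Hypothetical value in the shape GitHub renders star counts
stars = '3,402'
star_count = int(stars.replace(',', ''))
print(star_count)  # 3402
```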

Run the above code and the output should be something like this:

Output showing the three fields of information requested

Great! We can print what we set out to achieve. Printing is good on its own, but it would be even better if we could store it somewhere, such as in a CSV file. So let's save this information there.

Saving it as CSV

First we need to import the built-in csv module as such:

import csv

Then we need to open a file and write the headers into our csv file:

# Open writer with name

file_name = "github_trending_today.csv"

# set newline to be '' so that new rows are appended without skipping any

f = csv.writer(open(file_name, 'w', newline=''))

# write a new row as a header

f.writerow(['Developer', 'Repo Name', 'Number of Stars'])

Next, in the for loop, we need to write a new row into our csv file

f.writerow([developer, repo_name, stars])

That is all you need to save the trending information onto our csv file!
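As a side note, a common Python idiom is to open the file in a with block, which also closes the file for you when the block exits. A sketch, equivalent to the snippet above:

```python
import csv

file_name = "github_trending_today.csv"

# 'with' guarantees the file is flushed and closed when the block exits
with open(file_name, 'w', newline='') as csv_file:
    writer = csv.writer(csv_file)
    writer.writerow(['Developer', 'Repo Name', 'Number of Stars'])
    # ...inside the for loop you would call:
    # writer.writerow([developer, repo_name, stars])
```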

import requests
from bs4 import BeautifulSoup
import csv

page = requests.get('https://github.com/trending')

# Create a BeautifulSoup object
soup = BeautifulSoup(page.text, 'html.parser')

# get the repo list
repo = soup.find(class_="repo-list")

# find all instances of that class (should return 25 as shown in the github main page)
repo_list = repo.find_all(class_='col-12 d-block width-full py-4 border-bottom')

print(len(repo_list))

# Open writer with name
file_name = "github_trending_today.csv"
# set newline to be '' so that new rows are appended without skipping any
f = csv.writer(open(file_name, 'w', newline=''))

# write a new row as a header
f.writerow(['Developer', 'Repo Name', 'Number of Stars'])

for repo in repo_list:
    # find the first <a> tag and get the text. Split the text using '/' to get an array with developer name and repo name
    full_repo_name = repo.find('a').text.split('/')
    # extract the developer name at index 0
    developer = full_repo_name[0].strip()
    # extract the repo name at index 1
    repo_name = full_repo_name[1].strip()
    # find the first occurrence of the class octicon octicon-star and get the text from its parent (which is the number of stars)
    stars = repo.find(class_='octicon octicon-star').parent.text.strip()
    # strip() removes leading and trailing whitespace
    print('developer', developer)
    print('name', repo_name)
    print('stars', stars)
    print('Writing rows')
    # add the information as a row into the csv table
    f.writerow([developer, repo_name, stars])

This is what our script finally looks like. Once you run it, you will see a new file github_trending_today.csv appear in your folder. If you open it, it will look like this:

Scraped Information

Great! You have completed a simple tutorial to extract website information using Python!

Final Thoughts

The availability of various useful modules makes it incredibly simple for us to scrape data from websites for our projects. However, there is still a lot of work that needs to go into extracting the data accurately and cleaning up the data before it can be used to yield useful results.

Furthermore, if the structure of the website changes, such as the class names, tags or ids, the script needs to be changed accordingly, so we also need to think about the maintainability of the script.

I hope this has been useful for those looking to extract various information on their own from scratch! You can create multiple scripts, one for each web page you wish to scrape, all in the same project.

If anyone finds this useful, feel free to share it, or let me know should there be an error / bad practice / questionable implementation.




Thanks for reading

If you liked this post, share it with all of your programming buddies!

Follow us on Facebook | Twitter

Further reading

Complete Python Bootcamp: Go from zero to hero in Python 3

Machine Learning A-Z™: Hands-On Python & R In Data Science

Python and Django Full Stack Web Developer Bootcamp

Complete Python Masterclass

Python Programming Tutorial | Full Python Course for Beginners 2019 👍

Top 10 Python Frameworks for Web Development In 2019

Python for Financial Analysis and Algorithmic Trading

Building A Concurrent Web Scraper With Python and Selenium

Top 10 Python Frameworks for Web Development In 2019

In this article, we are going to share our list of the top 10 Python frameworks for web development, which we believe will help you to develop awesome applications and grow your technical abilities.

Given how dynamic web development has become, the popularity of Python frameworks seems to be only increasing. This object-oriented, interpreted, and interactive programming language is easy to learn and effectively reduces development time with its easy-to-read syntax. That's reason enough why it is continuously gaining popularity.

Also, it has a vast number of Python libraries that support data analysis, visualization, and manipulation. Consequently, it has advanced as the most favored programming language and is now considered the “Next Big Thing” for professionals.

Since Python does not come with the built-in features required to accelerate custom web application development, many developers turn to Python's robust collection of frameworks to deal with the subtleties of execution.

Python gives developers a wide range of frameworks to choose from. There are two types of Python frameworks: full-stack and non-full-stack. The full-stack frameworks give full support to Python developers, including basic components like form generators, form validation, and template layouts.

There is a cluster of full-stack options when we talk of Python frameworks. Listed below are the top 10 full-stack web frameworks for Python that you should be using in 2019 to enhance your technical abilities.

Django

Django is a free and open-source Python framework that enables developers to build complex applications effectively and quickly. This high-level framework streamlines web application development by providing a variety of robust features. It has a huge collection of libraries and emphasizes efficiency, less code, and reusability of components.

Django's key features include an authentication mechanism, URL routing, a template engine, and database schema migrations, and it implements an ORM (Object Relational Mapper) for mapping its objects to database tables. The framework supports numerous databases including PostgreSQL, MySQL, Oracle, and SQLite, which means the same code works across different databases.

Django's cutting-edge features help developers accomplish common web development tasks like user authentication, RSS feeds, content services, and sitemaps. Thanks to these features, the Django framework is extensively used in several high-traffic sites, including Pinterest, Instagram, Bitbucket, Mozilla, Disqus, and The Washington Times.

CherryPy

CherryPy is an open-source Python web development framework that embeds its very own multi-threaded server. It can run on any operating system that supports Python. CherryPy's features include a thread-pooled web server, a configuration system, and a plugin system.

As a minimalist web framework, it lets you use any sort of technology for data access, templating, and so on. Yet it can do everything a web framework can, for instance handling sessions, static files, file uploads, cookies, and more.

Despite the available features and advantages, like running on multiple platforms and built-in support for profiling, coverage, and testing, some developers may feel that there is a need for easier and better documentation. It doesn't constrain you to a specific template engine or ORM, so you can use anything you wish.

Pyramid

Pyramid is a Python framework that supports authentication and routing. It is great for building large web applications, such as CMSs, and it is useful for prototyping an idea and for developers working on API projects. Pyramid is adaptable and can be used for both easy and difficult projects.

Pyramid is rich in features without forcing a particular way of doing things, and lightweight without leaving you on your own as your app grows. It is a highly valued web framework among experienced Python developers by virtue of its transparency and modularity. It has been used by small teams as well as tech giants like Mozilla, Yelp, Dropbox, and SurveyMonkey.

Pyramid is consistently known for its security arrangements, which make it easy to set up and check access control lists. Another inventive functionality worth mentioning is Pyramid's traversal system for mapping URLs to code, which makes it simple to develop RESTful APIs.

TurboGears

TurboGears is an open-source, free, and data-driven full-stack web application Python framework. It is designed to overcome the shortcomings of various extensively used web development frameworks, and it empowers software engineers to begin developing web applications with a minimal setup.

TurboGears enables web developers to streamline web application development using diverse JavaScript development tools. You can develop web applications with the help of components such as SQLAlchemy, Repoze, WebOb, and Genshi much faster than with other existing frameworks. It supports different databases and web servers like Pylons.

The framework follows an MVC (Model-View-Controller) design and incorporates robust templates, a powerful Object Relational Mapper (ORM), and Ajax for the server and browser. Organizations using TurboGears include Bisque, ShowMeDo, and SourceForge.

Web2Py

Web2py is a free, open-source Python framework for web application development. The framework comes with a debugger, a code editor, and a deployment tool to help you build and debug code, as well as test and maintain web applications.

It's a cross-platform framework that supports Windows, Unix/Linux, Mac, Google App Engine, and various other platforms. It follows the MVC (Model-View-Controller) design. The framework streamlines the web application development process via a web server, SQL database, and online interface, and it enables users to build, revise, deploy, and manage web applications through a web browser.

The key component of Web2py is its ticketing system, which issues a ticket when an error occurs. This helps the user track the error and its status. It also has built-in components to manage HTTP requests, responses, sessions, and cookies.

Bottle

Another interesting Python web framework is Bottle, which falls under the class of micro frameworks. Originally, it was developed for building web APIs. Bottle also tries to keep everything in a single source file, which should give you a sense of how small it is designed to be.

The out-of-the-box functionality includes templating, utilities, routing, and some basic abstraction over the WSGI standard. As with Flask, you will be coding significantly closer to the metal than with a full-stack framework. Despite its size, Bottle has been used by Netflix to create web interfaces.

Tornado

Tornado is a Python web framework and asynchronous networking library. It uses non-blocking network I/O and solves the C10k problem (meaning that, when configured properly, it can handle 10,000+ simultaneous connections).

Tornado's main features include built-in support for user authentication, high performance, real-time services, a non-blocking HTTP client, a Python-based web templating language, and support for translation and localization.

This makes it a great tool for building applications that require high performance and a huge number of simultaneous users.

Flask

Flask is a Python framework available under the BSD license, inspired by the Sinatra Ruby framework. Flask relies on the Werkzeug WSGI toolkit and the Jinja2 template engine. Its main purpose is to provide a strong base for web applications.

Developers can build the backend any way they need, as it was designed for applications that are open-ended. Flask has been used by big companies, including LinkedIn and Pinterest. Compared to Django, Flask is best suited for small and simple projects. Thus, you can expect a development web server, support for Google App Engine, as well as built-in unit testing.

Grok

The Grok framework was created on top of the Zope toolkit to give developers an agile development experience by concentrating on convention over configuration and DRY (Don't Repeat Yourself). It is an open-source framework, developed to speed up the application development process.

Developers can choose from a wide range of community and independent libraries according to the needs of the task. Grok's UI (user interface) is similar to that of other full-stack frameworks such as Pylons and TurboGears.

Grok's component architecture helps developers reduce the unpredictability of development by providing views, content objects, and controllers. Grok likewise provides the building blocks and other essential assets required to develop custom web applications for business needs.

BlueBream

BlueBream is also an open-source web application framework, server, and library for website developers. It was developed by the Zope team and was formerly known as Zope 3.

This framework is best suited for medium and large projects divided into various reusable, well-separated components.

BlueBream relies on the Zope Toolkit (ZTK), which carries years of accumulated experience, ensuring that it meets the main requirements for durable, reliable, and adaptable software.

Conclusion

Though there are many Python web development frameworks that will be popular and in demand in the coming years, especially in 2019, every framework has its own pros and cons. Every developer has different coding styles and preferences, and will assess every framework against the requirements of the task at hand. In this way, the choice of Python web development framework will vary from one developer to the next.

The above listed are some of the Python frameworks widely used for full-stack backend web application development. Which one are you picking for your next project? Do let us know in the comments section below.