So how do you scrape data from the web?
Have you ever copied and pasted information from websites?
If yes, then you’ve already performed web scraping in a way. But you can’t really copy and paste a hundred times or more, can you?
So let’s see how Python helps us do the same with the help of one of its packages – BeautifulSoup.
Some websites, like Twitter and Facebook, provide APIs for easy connectivity and access to their data. But some don’t, so you’ll have to write code to navigate through and extract their content.
Remember, not every website is cool with you scraping its content, so make sure you’re aware of the website’s terms and conditions.
You can take a look at a website’s permissions by appending ‘/robots.txt’ to its URL.
The _robots.txt_ file is known as the _Robots Exclusion Protocol_.
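Python’s standard library can even read these rules for you. Here’s a minimal sketch using `urllib.robotparser` on a made-up robots.txt (the rules and URLs below are hypothetical, not Wikipedia’s actual ones):

```python
from urllib.robotparser import RobotFileParser

# A sample robots.txt (hypothetical, for illustration only)
rules = """
User-agent: *
Disallow: /private/
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# Paths under /private/ are disallowed; everything else is allowed
print(parser.can_fetch("*", "https://example.com/private/page"))
print(parser.can_fetch("*", "https://example.com/wiki/COVID-19_pandemic"))
```

In a real script you’d point the parser at the site’s actual robots.txt with `set_url()` and `read()` before deciding whether to scrape a page.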
We’ll scrape the number of COVID-19 cases for each country.
It’s important to know the site’s structure to extract the information you’re interested in. Find out the HTML tags in which the data to be scraped is present.
Right-click on the webpage and then click on Inspect.
To understand and inspect the content, you need to know a few HTML tags that are commonly used.
- `<h1>` to `<h6>`: headings
- `<p>`: paragraphs
These tags can further have attributes like class, id, src, title, etc.
Inspecting the website mentioned earlier, the tags highlighted in pink in the screenshot are the ones we’ll be extracting data from.
- Step 3- Get the site’s HTML code in your Python script.
We’ll use the requests library to send an HTTP request to the website. The server will respond with the HTML content of the page.
```python
import requests

response = requests.get("https://en.wikipedia.org/wiki/COVID-19_pandemic")
```
Let’s check if the request was successful or not.
```python
response.status_code
```

_Output:_ 200
A status code starting with 2 generally indicates success, while codes starting with 4 or 5 indicate an error.
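To make that rule concrete, here’s a small helper (not part of requests, just an illustration) that maps a status code to its rough category:

```python
def describe_status(code):
    """Map an HTTP status code to a rough outcome category."""
    if 200 <= code < 300:
        return "success"
    if 400 <= code < 500:
        return "client error"
    if 500 <= code < 600:
        return "server error"
    return "other"

print(describe_status(200))  # success
print(describe_status(404))  # client error
print(describe_status(503))  # server error
```

In practice you can also just check `response.ok`, which is True for any non-error status.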
```python
response.content
```
The response obtained will look similar to the HTML content you inspected.
- Step 4- Parse HTML data with BeautifulSoup
The HTML content looks complex and confusing due to nested tags and multiple attributes. We now need BeautifulSoup to simplify our task.
BeautifulSoup is a Python package for parsing HTML and XML documents. It creates a parse tree and makes extracting data easy.
Let’s first import the BeautifulSoup package and create a BeautifulSoup object named ‘soup’.
```python
from bs4 import BeautifulSoup

soup = BeautifulSoup(response.content, 'html.parser')
print(soup.prettify())
```
The **prettify()** function helps us view the manner in which the tags are nested.
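To get a feel for how BeautifulSoup navigates the parse tree, here’s a self-contained sketch run on a tiny made-up snippet of HTML (the tag names and attributes below are invented for illustration, not taken from the Wikipedia page):

```python
from bs4 import BeautifulSoup

# A small, hypothetical HTML snippet standing in for the real page
html = """
<div id="covid">
  <h2 class="title">COVID-19 Cases</h2>
  <p>Cases by country are listed below.</p>
</div>
"""

soup = BeautifulSoup(html, "html.parser")

heading = soup.find("h2", class_="title")  # first tag matching name and class
print(heading.text)            # text inside the tag
print(heading["class"])        # attribute lookup works like a dict
print(soup.find("div")["id"])  # first <div> tag's id attribute
```

`find()` returns the first match, while `find_all()` returns every matching tag as a list.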
The cases-by-country data sits in an HTML table, which is built from these tags:

- `<table>`: tables
- `<tr>`: table rows
- `<td>`: table cells
- `<th>`: table headers
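Putting those table tags together, here’s a sketch of how the extraction step might look, run on a tiny made-up table rather than the live Wikipedia page (the countries and figures are placeholders, not real data):

```python
from bs4 import BeautifulSoup

# Hypothetical miniature version of a cases-by-country table
html = """
<table>
  <tr><th>Country</th><th>Cases</th></tr>
  <tr><td>India</td><td>100</td></tr>
  <tr><td>Brazil</td><td>200</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
table = soup.find("table")

# Column names come from the <th> header cells
headers = [th.text for th in table.find_all("th")]

# Each remaining <tr> row contributes one list of <td> cell values
rows = []
for tr in table.find_all("tr")[1:]:  # skip the header row
    rows.append([td.text for td in tr.find_all("td")])

print(headers)  # ['Country', 'Cases']
print(rows)     # [['India', '100'], ['Brazil', '200']]
```

Against the real page you’d build `soup` from `response.content` instead and pick out the right table, since the article contains more than one.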