Getting started with beautifulsoup

Remarks

In this section, we discuss what Beautiful Soup is, what it is used for, and a brief outline of how to go about using it.

Beautiful Soup is a Python library that uses your pre-installed HTML/XML parser to convert a web page (or HTML/XML text) into a tree consisting of tags, elements, attributes and values. To be more exact, the tree consists of four types of objects: Tag, NavigableString, BeautifulSoup and Comment. This tree can then be "queried" using the methods/properties of the BeautifulSoup object that is created from the parser library.
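
A minimal sketch of those four object types, using a tiny hand-written HTML snippet (the markup here is made up for illustration):

```python
from bs4 import BeautifulSoup, Comment, NavigableString, Tag

# A tiny invented document, just for illustration.
html = "<p class='intro'>Hello <b>world</b><!-- a note --></p>"
soup = BeautifulSoup(html, "html.parser")

p = soup.p                    # the <p> element is a Tag
text = p.contents[0]          # the raw text "Hello " is a NavigableString
comment = p.find(string=lambda s: isinstance(s, Comment))

print(type(soup).__name__)                # BeautifulSoup
print(isinstance(p, Tag))                 # True
print(isinstance(text, NavigableString))  # True
print(isinstance(comment, Comment))       # True
```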

Your need: Often, you may have one of the following needs:

  1. You might want to parse a web page to determine which tags are present, how many elements of each tag are found, and what their values are. You might want to change them.

  2. You might want to determine element names and values, so that you can use them in conjunction with other libraries for web page automation, such as Selenium.

  3. You might want to transfer/extract data shown in a web page to other formats, such as a CSV file or a relational database such as SQLite or MySQL. In this case, the library helps you with the first step: understanding the structure of the web page, while other libraries do the actual transfer.

  4. You might want to find out how many elements are styled with a certain CSS style and which ones.
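
The first and fourth needs above can be sketched in a few lines; the HTML snippet and class name below are invented for illustration:

```python
from collections import Counter
from bs4 import BeautifulSoup

html = """
<html><body>
  <p class="lead">First paragraph</p>
  <p>Second paragraph</p>
  <a href="/home">Home</a>
</body></html>
"""
soup = BeautifulSoup(html, "html.parser")

# How many elements of each tag are found (need 1)
tag_counts = Counter(tag.name for tag in soup.find_all(True))
print(tag_counts["p"])   # 2

# Which elements carry a given CSS class (need 4)
lead_tags = soup.find_all(class_="lead")
print([t.get_text() for t in lead_tags])   # ['First paragraph']
```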

Sequence for typical basic use in your Python code:

  1. Import the Beautiful Soup library

  2. Open a web page or HTML text with the BeautifulSoup library, mentioning which parser to use. The result of this step is a BeautifulSoup object. (Note: the parser you mention must already be installed as part of your Python packages. For instance, html.parser is a built-in, 'batteries-included' parser shipped with Python. You could install other parsers such as lxml or html5lib.)

  3. "Query" or search the BeautifulSoup object using the syntax 'object.method' and obtain the result in a list-like collection. For some methods, the output will be a simple value.

  4. Use the result from the previous step to do whatever you want to do with it in the rest of your Python code. You can also modify the element values or attribute values in the tree object. Modifications don't affect the original HTML source, but you can call output formatting methods (such as prettify) to create new output from the BeautifulSoup object.
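
The four steps above can be sketched as follows; the HTML string and the id value are made up for illustration:

```python
# Step 1: import the Beautiful Soup library
from bs4 import BeautifulSoup

# Step 2: open HTML text, mentioning which parser to use
soup = BeautifulSoup("<p id='msg'>old text</p>", "html.parser")

# Step 3: query the resulting BeautifulSoup object
tag = soup.find("p", id="msg")

# Step 4: modify element values or attributes, then produce new output;
# the original HTML string itself is not changed
tag.string = "new text"
tag["class"] = "updated"
print(soup.prettify())
```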

Commonly used methods: Typically, the .find and .find_all methods are used to search the tree, given suitable input arguments.

The input arguments are the tag name being sought, attribute names, and other related arguments. These arguments can be presented as a string, a regular expression, a list, or even a function.
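
As a sketch of those four argument forms (the tag names and text below are arbitrary):

```python
import re
from bs4 import BeautifulSoup

html = "<b>bold</b><i>italic</i><div>block</div><span>inline</span>"
soup = BeautifulSoup(html, "html.parser")

# A string: match one tag name exactly
print([t.name for t in soup.find_all("b")])                # ['b']

# A list: match any of several tag names
print([t.name for t in soup.find_all(["b", "i"])])         # ['b', 'i']

# A regular expression: match tag names against a pattern
print([t.name for t in soup.find_all(re.compile("^s"))])   # ['span']

# A function: match whatever tags the function returns True for
print([t.name for t in soup.find_all(lambda t: len(t.text) == 5)])  # ['div']
```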

Common uses of the BeautifulSoup object include:

  1. Search by CSS class
  2. Search by hyperlink address
  3. Search by element id or tag
  4. Search by attribute name and attribute value

If you need to filter the tree with a combination of the above criteria, you can also write a function that evaluates to true or false, and search by that function.
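
For instance, a filter function might combine a tag name, a CSS class and an attribute check; the markup and class name here are invented for illustration:

```python
from bs4 import BeautifulSoup

html = """
<a href="https://example.com" class="ext">External</a>
<a href="/about" class="ext">About</a>
<a href="https://example.org">Other</a>
"""
soup = BeautifulSoup(html, "html.parser")

def external_with_class(tag):
    """True only for <a> tags with class 'ext' and an absolute href."""
    return (tag.name == "a"
            and "ext" in tag.get("class", [])
            and tag.get("href", "").startswith("http"))

matches = soup.find_all(external_with_class)
print([t.get_text() for t in matches])   # ['External']
```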

Versions

Version   Remarks                          Package name     Release date
3.x       Version 3.2.1; Python 2 only     beautifulsoup    2012-02-16
4.x       Version 4.5.0; Python 2 and 3    beautifulsoup4   2016-07-20

A BeautifulSoup "Hello World" scraping example

from bs4 import BeautifulSoup
import requests

main_url = "https://fr.wikipedia.org/wiki/Hello_world"
req = requests.get(main_url)
soup = BeautifulSoup(req.text, "html.parser")

# Finding the main title tag.
title = soup.find("h1", class_="firstHeading")
print(title.get_text())

# Finding the mid-title tags and storing them in a list.
mid_titles = [tag.get_text() for tag in soup.find_all("span", class_="mw-headline")]

# Now using CSS selectors to retrieve the article shortcut links
links_tags = soup.select("li.toclevel-1")
for tag in links_tags:
    print(tag.a.get("href"))

# Retrieving the side page links by "blocks" and storing them in a dictionary
side_page_blocks = soup.find("div",
                             id="mw-panel").find_all("div",
                                                     class_="portal")
blocks_links = {}
for num, block in enumerate(side_page_blocks):
    blocks_links[num] = [link.get("href") for link in block.find_all("a", href=True)]

print(blocks_links[0])
 

Output:

"Hello, World!" program
#Purpose
#History
#Variations
#See_also
#References
#External_links
['/wiki/Main_Page', '/wiki/Portal:Contents', '/wiki/Portal:Featured_content', '/wiki/Portal:Current_events', '/wiki/Special:Random', 'https://donate.wikimedia.org/wiki/Special:FundraiserRedirector?utm_source=donate&utm_medium=sidebar&utm_campaign=C13_en.wikipedia.org&uselang=en', '//shop.wikimedia.org']
 

Entering your preferred parser when instantiating Beautiful Soup avoids the usual warning declaring that no parser was explicitly specified.

Different methods can be used to find an element within the webpage tree.

Although a handful of other methods exist, CSS classes and CSS selectors are two handy ways to find elements in the tree.

It should be noted that we can restrict a search to tags that actually carry a given attribute by setting that attribute to True when searching (as with href=True above).

get_text() retrieves the text contained within a tag and returns it as a single Unicode string. tag.get("attribute") returns the value of a tag's attribute.
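
These points can be condensed into one small sketch; the list markup below is made up for illustration:

```python
from bs4 import BeautifulSoup

html = '<ul><li class="item"><a href="/a">First</a></li><li><a>No href</a></li></ul>'
soup = BeautifulSoup(html, "html.parser")

# CSS class vs. CSS selector: two routes to the same element
by_class = soup.find("li", class_="item")
by_selector = soup.select("li.item")[0]

# href=True keeps only the <a> tags that actually carry an href attribute
linked = soup.find_all("a", href=True)

print(by_class.get_text())     # First
print(linked[0].get("href"))   # /a
print(len(linked))             # 1
```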

Installation or Setup

pip may be used to install BeautifulSoup. To install Version 4 of BeautifulSoup, run the command:

pip install beautifulsoup4
 

Be aware that the package name is beautifulsoup4 instead of beautifulsoup; the latter name stands for the old, unmaintained release (Beautiful Soup 3).
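
To check which release you ended up with, note that the import name is bs4 regardless of the package name:

```python
import bs4

# With beautifulsoup4 installed, this prints a 4.x version string.
print(bs4.__version__)
```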

Stats

Contributors: 7
Friday, May 19, 2017
Licensed under: CC-BY-SA

Not affiliated with Stack Overflow
Rip Tutorial: info@zzzprojects.com
