Master Python Web Scraping with Selenium and Beautiful Soup: Step‑by‑Step Guide
This tutorial explains how to build a Python web scraper using Selenium and Beautiful Soup, covering setup, login automation, HTML extraction, data parsing with html5lib, handling anti‑scraping measures, and best practices for large‑scale data collection.
What Is Web Scraping?
Web scraping is the process of extracting information from websites. An HTML page is a nested markup tree rooted at the <html> tag, with elements that have parent‑child relationships.
For example, the <html> root typically has <head> and <body> as children, and each of those contains further nested elements such as <title>, <div>, and <p>.
Web scraping traverses this tree to locate the nodes containing the desired data, converting unstructured HTML into structured information suitable for databases or spreadsheets.
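To make the tree idea concrete, here is a minimal sketch that walks a tiny HTML document and prints each tag at its nesting depth. It uses only the standard library's html.parser (rather than Beautiful Soup) so it runs with no extra installs; the TreePrinter class and the sample markup are illustrative only.

```python
# A minimal sketch of traversing the HTML tree with the
# standard library's html.parser (illustrative, not Beautiful Soup).
from html.parser import HTMLParser

class TreePrinter(HTMLParser):
    """Records each opening tag together with its nesting depth."""
    def __init__(self):
        super().__init__()
        self.depth = 0
        self.outline = []

    def handle_starttag(self, tag, attrs):
        # Indent two spaces per level of nesting
        self.outline.append('  ' * self.depth + tag)
        self.depth += 1

    def handle_endtag(self, tag):
        self.depth -= 1

parser = TreePrinter()
parser.feed('<html><head><title>Demo</title></head>'
            '<body><p>Hello</p></body></html>')
print('\n'.join(parser.outline))
# html
#   head
#     title
#   body
#     p
```

A scraper does the same kind of walk, but instead of printing every node it stops at the ones holding the data it wants.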
Tools Used
We will use the Beautiful Soup library to parse HTML and the Selenium library to automate browser interactions such as logging in and navigating to target pages.
Code Walkthrough
First, import the required libraries:
```python
# Import libraries
from selenium import webdriver
from bs4 import BeautifulSoup
```

Set up a headless Chrome driver:
```python
# Selenium 4 passes the driver path via a Service object;
# executable_path and chrome_options are deprecated
from selenium.webdriver.chrome.service import Service

# Path to the Chrome driver binary
chromedriver = '/usr/local/bin/chromedriver'

options = webdriver.ChromeOptions()
options.add_argument('--headless')  # open a headless browser

browser = webdriver.Chrome(service=Service(chromedriver), options=options)
```

Navigate to the login page and locate the input fields:
```python
# find_element_by_name was removed in Selenium 4; use By locators
from selenium.webdriver.common.by import By

# Open the login page
browser.get('http://playsports365.com/default.aspx')

# Locate the form fields by their name attribute
email = browser.find_element(By.NAME, 'ctl00$MainContent$ctlLogin$_UserName')
password = browser.find_element(By.NAME, 'ctl00$MainContent$ctlLogin$_Password')
login = browser.find_element(By.NAME, 'ctl00$MainContent$ctlLogin$BtnSubmit')
```

Enter the credentials and submit the form:
```python
# Fill in the credentials (replace with your own)
email.send_keys('********')
password.send_keys('*******')

# Click the submit button
login.click()
```

After a successful login, navigate to the target page and retrieve the HTML source:
```python
# Go to the desired page after logging in
browser.get('http://playsports365.com/wager/OpenBets.aspx')

# Get the rendered HTML content
requiredHtml = browser.page_source
```

Parse the HTML with Beautiful Soup and html5lib, then extract the first table:
```python
soup = BeautifulSoup(requiredHtml, 'html5lib')

# Grab the first table on the page
tables = soup.find_all('table')
my_table = tables[0]
```

Iterate over rows and cells to print the extracted values:
```python
# Walk the table rows and print each cell's value
rows = my_table.find_all('tr')
for row in rows:
    cells = row.find_all('td')
    for cell in cells:
        print(cell.text)
```

Before running the script, install the required packages with pip: `pip install selenium beautifulsoup4 html5lib`.
Advanced Topics
For sites that update frequently, schedule the scraper with a cron job. To avoid being blocked (e.g., 403 errors), rotate user‑agents, use random delays, and employ proxy services such as Tor or commercial proxy providers.
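The user-agent rotation and random delays can be sketched as below. This is a hypothetical illustration: the USER_AGENTS list and the helper names pick_user_agent and polite_delay are made up for this example, and the delay bounds should be tuned to the target site.

```python
# A sketch of two common anti-blocking tactics: rotating
# user-agent strings and sleeping a random interval between
# requests. The user-agent strings are illustrative only.
import random
import time

USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)',
    'Mozilla/5.0 (X11; Linux x86_64)',
]

def pick_user_agent():
    """Choose a user-agent at random for the next browser session."""
    return random.choice(USER_AGENTS)

def polite_delay(low=2.0, high=6.0):
    """Sleep a random number of seconds to mimic human pacing."""
    delay = random.uniform(low, high)
    time.sleep(delay)
    return delay

# With Selenium, the chosen string would be applied via:
#   options.add_argument(f'user-agent={pick_user_agent()}')
# and a cron entry such as "0 * * * * /usr/bin/python3 scraper.py"
# would run the scraper hourly.
```

Proxies (Tor or a commercial provider) complete the picture by varying the source IP address, which none of the tactics above can do on their own.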
Using these techniques, you can reliably collect large volumes of data from websites that do not provide public APIs.