How to Scrape Baidu Search Results with Python and Regex

This article demonstrates how to use Python's requests library and regular expressions to scrape Baidu search result titles and URLs, save them to a CSV file, and suggests alternative parsers like XPath or BeautifulSoup for further extraction.


1. Introduction

This article shares a Python script that fetches Baidu search result titles and links with a regular-expression-based extractor. The approach is handy when an existing extractor breaks after Baidu changes its page structure.
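As a quick illustration of the extraction idea, the pattern used later in the script pairs each title with its URL. The snippet below runs it against a made-up fragment mimicking the JSON-like "title"/"titleUrl" fields embedded in Baidu result pages (not a live response):

```python
import re

# Hypothetical fragment in the shape of Baidu's embedded result data;
# a real response contains many such title/titleUrl pairs.
snippet = '{"title":"Example <em>Python</em> Tutorial","titleUrl":"http://www.baidu.com/link?url=abc"}'

pattern = '"title":"(?P<title>.*?)".*?"titleUrl":"(?P<title_url>.*?)"'
pairs = re.findall(pattern, snippet)  # each match is a (title, url) tuple
print(pairs)
```

With two named groups, re.findall returns a list of (title, url) tuples, which is exactly the shape the parser below iterates over.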

2. Implementation

The core script is shown below. Note that Baidu rejects requests without valid session cookies, so replace the Cookie header with one copied from your own logged-in browser session before running.

# -*- coding: utf-8 -*-
# @Time    : 2022/4/19 0019 18:24
# @Author  : 皮皮:Python共享之家
# @File    : demo.py

import requests
from fake_useragent import UserAgent
import re

def get_web_page(wd, pn):
    url = 'https://www.baidu.com/s'
    ua = UserAgent()
    # print(ua)
    headers = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'User-agent': ua.random,
        'Cookie': '<paste the Cookie header from your own logged-in Baidu session here>',  # hard-coded cookies expire quickly; copy a fresh one from your browser
        'Host': 'www.baidu.com'
    }
    params = {
        'wd': wd,
        'pn': pn
    }
    response = requests.get(url, headers=headers, params=params)
    response.encoding = 'utf-8'
    # print(response.text)
    response = response.text
    return response

def parse_page(response):
    ex = '"title":"(?P<title>.*?)".*?"titleUrl":"(?P<title_url>.*?)"'
    titles = re.findall(ex, response)
    data = []
    nub = 0
    for title in titles:
        # quote both fields so commas inside titles don't break the CSV row
        title = '"' + '","'.join(title) + '"'
        title = title.replace('<em>', '').replace('</em>', '')
        if title.startswith('"\\u00'):
            continue
        nub += 1
        data.append(title)
        print(title)
    print(f"Found {nub} title/URL pairs on the current page!")
    return data

def save_data(datas, kw, page):
    # open the file once and append every row
    with open(f'./baidu_{kw}_page_{page}.csv', 'a', encoding='utf-8') as fp:
        for data in datas:
            fp.write(data + '\n')
    print(f"Page {page} of Baidu results for '{kw}' saved successfully!")

def main():
    kw = input("Enter the keyword to search for: ").strip()
    page = input("Enter the page number: ").strip()
    page_pn = int(page)
    # Baidu's pn parameter is a 0-based result offset: page 1 -> 0, page 2 -> 10, ...
    page_pn = str(page_pn * 10 - 10)
    resp = get_web_page(kw, page_pn)
    datas = parse_page(resp)
    save_data(datas, kw, page)

if __name__ == '__main__':
    main()

The script prints each extracted title and reports the total number found on the current page. After execution, a CSV file containing the collected titles and URLs, one pair per line, is written to the working directory.
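If you prefer not to hand-roll the quoting, Python's standard csv module handles escaping automatically. A minimal sketch (the file name and sample rows are illustrative, not from the original script):

```python
import csv

# Illustrative rows in the same (title, url) shape the parser produces
rows = [("Example Python Tutorial", "http://www.baidu.com/link?url=abc")]

with open("results.csv", "w", newline="", encoding="utf-8") as fp:
    writer = csv.writer(fp)
    writer.writerow(["title", "url"])  # header row
    writer.writerows(rows)             # csv module quotes fields as needed
```

Using csv.writer also makes the file easier to read back with csv.reader or pandas, since commas inside titles are escaped consistently.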

3. Conclusion

While this example uses regular expressions, alternatives such as XPath or BeautifulSoup (bs4) can also be employed for extraction. A future article will demonstrate using bs4 to scrape Baidu titles and links.
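As a taste of the bs4 route, here is a minimal sketch that parses a static HTML fragment. The h3.t selector reflects a commonly seen Baidu result layout, but the real markup changes over time, so treat both the fragment and the selector as assumptions to verify against a live page:

```python
from bs4 import BeautifulSoup

# Static stand-in for a Baidu results page; real pages differ and change often.
html = '''
<div id="content_left">
  <div class="result">
    <h3 class="t"><a href="http://www.baidu.com/link?url=abc">Example <em>Python</em> result</a></h3>
  </div>
</div>
'''

soup = BeautifulSoup(html, "html.parser")
results = []
for h3 in soup.select("h3.t"):  # one h3.t per organic result (assumed layout)
    a = h3.find("a")
    results.append((a.get_text(), a["href"]))
print(results)
```

Note that get_text() flattens the <em> highlight tags automatically, which the regex version had to strip by hand.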

Tags: Python, CSV, regex, web-scraping, Baidu
Written by

Python Crawling & Data Mining

Life's short, I code in Python. This channel shares Python web crawling, data mining, analysis, processing, visualization, automated testing, DevOps, big data, AI, cloud computing, machine learning tools, resources, news, technical articles, tutorial videos and learning materials. Join us!
