Build a Python Video Downloader for Bulk Media Extraction

This article walks through creating a Python script that parses web pages, extracts video URLs, and downloads multiple videos efficiently with progress tracking and buffering, providing complete code examples and implementation steps.

Python Crawling & Data Mining

Project Background

Many people enjoy streaming videos, but network issues can prevent access, prompting the need to download videos for offline viewing.

Project Goal

Use a Python program to batch‑download videos of interest, avoiding the complexity of learning new languages or tools.

Implementation Steps

1. Analyze the web page structure

Inspect the page and locate the a tags with class videoDown; their href attributes hold the (relative) video URLs.

# Parse page
import requests
from bs4 import BeautifulSoup

headers = {'user-agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 '
                         '(KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36'}

def parser():
    urls = []
    rep = requests.get('http://v.u00.cn:93/iappce.htm#sp', timeout=5, headers=headers)
    rep.encoding = 'utf-8'
    soup = BeautifulSoup(rep.text, 'html.parser')
    for a in soup.find_all('a', class_='videoDown'):  # every a tag with class videoDown
        urls.append('http://v.u00.cn:93' + a.attrs['href'])  # relative href -> absolute URL
    return urls  # list of video URLs
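The extraction step can be tried without touching the network. A minimal sketch, using an invented HTML fragment that mirrors the page's structure (the sample anchors are assumptions, not the real page content):

```python
from bs4 import BeautifulSoup

# Hypothetical fragment shaped like the target page, for illustration only.
sample_html = '''
<a class="videoDown" href="/video/1.mp4">Download 1</a>
<a class="other" href="/about.htm">About</a>
<a class="videoDown" href="/video/2.mp4">Download 2</a>
'''

soup = BeautifulSoup(sample_html, 'html.parser')
links = ['http://v.u00.cn:93' + a.attrs['href']
         for a in soup.find_all('a', class_='videoDown')]  # only class="videoDown"
print(links)
# ['http://v.u00.cn:93/video/1.mp4', 'http://v.u00.cn:93/video/2.mp4']
```

Note that find_all filters on the class attribute, so the unrelated "About" link is skipped.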

2. Download files

Implement a function that downloads a single video, handling filename extraction and file writing.

# Download function (simple, unbuffered version)
def down(y, x):
    print('------ Downloading', x, '------')
    sa = y.split('.')[3]  # filename segment between the 3rd and 4th dot of the URL
    ree = requests.get(y)
    with open('%d.%s.mp4' % (x, sa), 'wb') as f:
        f.write(ree.content)  # whole file is held in memory, then written at once
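Splitting the URL on dots is fragile. A sturdier sketch derives the name from the URL path with the standard library (name_from_url is a helper introduced here, not part of the original script):

```python
import os
from urllib.parse import urlparse

def name_from_url(url):
    # Take the last path segment and drop its extension,
    # e.g. '.../abc123.mp4' -> 'abc123'.
    base = os.path.basename(urlparse(url).path)
    return os.path.splitext(base)[0]

name = name_from_url('http://v.u00.cn:93/video/abc123.mp4')
print(name)  # abc123
```

Unlike split('.'), this keeps working if the host or path gains extra dots.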

3. Get video size and add buffering

Retrieve the video size from the response headers, then issue a streamed range request and write the file in small chunks with a progress bar, so large videos are never loaded into memory all at once and an interrupted download can resume where it left off.

import os
import urllib3
from contextlib import closing
from tqdm import tqdm

def download(url, file_name):  # download one video, resuming if a partial file exists
    urllib3.disable_warnings()
    rep = requests.get(url, headers=headers, stream=True)  # stream=True: headers only, no body yet
    file_size = int(rep.headers.get('Content-Length', 0))  # total size in bytes
    rep.close()
    if os.path.exists(file_name):
        first_byte = os.path.getsize(file_name)
    else:
        first_byte = 0
    if first_byte >= file_size:
        return file_size  # already complete
    header = {"Range": "bytes=%d-" % first_byte,  # request only the missing tail
              'user-agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36'}
    pbar = tqdm(total=file_size, initial=first_byte, unit='B', unit_scale=True,
                desc="Downloading video %s" % file_name)
    mode = 'ab' if first_byte else 'wb'  # append when resuming, otherwise overwrite
    with closing(requests.get(url, headers=header, stream=True)) as req:
        with open(file_name, mode) as f:
            for chunk in req.iter_content(chunk_size=1024 * 2):
                if chunk:
                    f.write(chunk)
                    pbar.update(len(chunk))  # advance by the bytes actually written
    pbar.close()
    return file_size
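The chunked-write loop can be demonstrated offline. A minimal sketch, assuming an in-memory buffer stands in for the streamed HTTP body; it shows why the progress bar should advance by len(chunk) rather than a fixed chunk size (the final chunk is usually shorter):

```python
import io
from tqdm import tqdm

payload = b'x' * 10_000          # simulated response body (no network needed)
src = io.BytesIO(payload)

written = 0
pbar = tqdm(total=len(payload), unit='B', unit_scale=True, desc='demo')
with io.BytesIO() as dst:        # stands in for the output file
    while True:
        chunk = src.read(2048)   # same 2 KiB chunking as the real loop
        if not chunk:
            break
        dst.write(chunk)
        written += len(chunk)
        pbar.update(len(chunk))  # advance by actual bytes, not a fixed 1024
pbar.close()
print(written)  # 10000
```

Here 10 000 bytes split into four 2048-byte chunks plus a final 1808-byte chunk; updating by a constant would leave the bar short or overshoot it.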

4. Combine functions and run

def fd():  # batch-download every parsed URL with a progress bar
    for x, y in enumerate(parser(), start=1):
        print('---- Downloading', x, '----')
        sa = y.split('.')[3]  # same filename extraction as in down()
        download(y, '{}.{}.mp4'.format(x, sa))
        print('---- Completed', x, '----')

fd()

Conclusion

The script successfully downloads multiple video files, but it runs strictly sequentially; adding multithreading, multiprocessing, or asynchronous I/O would let several downloads proceed in parallel and improve overall throughput.
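A minimal sketch of the concurrency the conclusion suggests, using concurrent.futures; the download function here is a stub standing in for the real one above so the example runs offline:

```python
from concurrent.futures import ThreadPoolExecutor

def download(url, file_name):  # stub: the real download() would fetch and save the video
    return len(url)            # pretend "file size" so the example is self-contained

urls = ['http://v.u00.cn:93/video/%d.mp4' % i for i in range(1, 6)]
names = ['%d.mp4' % i for i in range(1, len(urls) + 1)]

# Up to 4 videos download at the same time; map preserves input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    sizes = list(pool.map(download, urls, names))
print(len(sizes))  # 5
```

Threads suit this workload because downloading is I/O-bound; requests releases the GIL while waiting on the network, so a thread pool gives real parallelism without multiprocessing overhead.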
