Backend Development

Understanding Python Coroutines and Asynchronous HTTP Requests with httpx

This article introduces Python coroutines, compares them with multithreading, explains when to use them, and demonstrates how the httpx library can perform asynchronous HTTP requests to dramatically improve backend performance, including installation steps and a benchmark against synchronous requests.

Python Programming Learning Circle

Recently, a company's Python backend project was refactored to use asynchronous coroutines, prompting the author to explore async programming in depth.

What is a coroutine?

A coroutine is a lightweight, cooperatively scheduled unit of execution that lives in user space on top of an operating-system thread. It suspends and resumes at explicit points, which gives it lower overhead and finer-grained control than a thread.
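A minimal sketch of the idea (the names and delays are illustrative): two coroutines that each wait 0.1 seconds finish together in about 0.1 seconds total, because `await` hands control back to the event loop instead of blocking.

```python
import asyncio

async def fetch_data(delay, value):
    # 'await' suspends this coroutine, letting the event loop run others
    await asyncio.sleep(delay)
    return value

async def main():
    # Both coroutines run concurrently; total time is roughly max(delay), not the sum
    results = await asyncio.gather(fetch_data(0.1, 'a'), fetch_data(0.1, 'b'))
    print(results)

asyncio.run(main())  # prints ['a', 'b']
```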

Why prefer coroutines over multithreading?

Coroutine switches happen entirely in user space, avoiding kernel-level context switches; each coroutine needs far less memory than a thread stack (on the order of KB versus MB); and because all coroutines in an event loop run on a single thread, most locking becomes unnecessary.
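A small sketch of the no-locking point (the counts are arbitrary): ten coroutines bump a shared counter without any lock, and the result is always exact, because control only changes hands at `await` points.

```python
import asyncio

counter = 0

async def increment(n):
    global counter
    for _ in range(n):
        # No lock needed: control only switches at 'await' points,
        # so this read-modify-write is never interleaved mid-operation
        counter += 1
    await asyncio.sleep(0)  # yield to the event loop once we're done

async def main():
    await asyncio.gather(*(increment(1000) for _ in range(10)))
    print(counter)  # always 10000

asyncio.run(main())
```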

Suitable and unsuitable scenarios

Coroutines excel at I/O-bound, highly concurrent tasks. They are a poor fit for CPU-intensive workloads, where traditional multithreading or multiprocessing remains preferable.
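When CPU-bound work does creep into an async program, one common escape hatch is to push the blocking function onto a worker thread so the event loop stays responsive. A sketch assuming Python 3.9+ for `asyncio.to_thread` (for true CPU parallelism, a process pool would serve better):

```python
import asyncio

def cpu_heavy(n):
    # A blocking, CPU-bound function: 'await' alone cannot speed this up
    return sum(i * i for i in range(n))

async def main():
    # Offload the blocking call to a worker thread so other coroutines keep running
    result = await asyncio.to_thread(cpu_heavy, 100_000)
    print(result)

asyncio.run(main())
```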

Introducing httpx

httpx is an HTTP client library that mirrors the familiar requests API while adding first-class asynchronous support, making it a natural choice for coroutine-based code.

Installation

<code>pip install httpx</code>

Benchmark: synchronous vs asynchronous requests

Using httpx.get synchronously to request Baidu 200 times took about 16.6 seconds.

<code>import httpx
import threading
import time

def sync_main(url, sign):
    # Blocking request: each call waits for the full round trip
    status_code = httpx.get(url).status_code
    print(f'sync_main: {threading.current_thread()}: {sign} {status_code}')

sync_start = time.time()
for i in range(200):
    sync_main(url='http://www.baidu.com', sign=i)
sync_end = time.time()
print(sync_end - sync_start)
</code>

The asynchronous version, using httpx.AsyncClient with async/await, completed the same 200 requests in roughly 4.5 seconds, a reduction of about 73% in total time (roughly 3.7x faster).

<code>import asyncio
import httpx
import threading
import time

async def async_main(client, url, sign):
    response = await client.get(url)
    print(f'async_main: {threading.current_thread()}: {sign}: {response.status_code}')

async def main():
    # A single shared AsyncClient reuses connections across all requests
    async with httpx.AsyncClient() as client:
        tasks = [async_main(client, 'http://www.baidu.com', i) for i in range(200)]
        await asyncio.gather(*tasks)

async_start = time.time()
asyncio.run(main())
async_end = time.time()
print(async_end - async_start)
</code>

These results demonstrate that adopting coroutines and the httpx library can significantly reduce I/O latency in Python backend services, while also providing a solid foundation for building more efficient testing frameworks.

Tags: backend, performance, Python, async, coroutine, httpx
Written by

Python Programming Learning Circle

A global community of Chinese Python developers offering technical articles, columns, original video tutorials, and problem sets. Topics include web full‑stack development, web scraping, data analysis, natural language processing, image processing, machine learning, automated testing, DevOps automation, and big data.
