
Why async/await Doesn’t Give You Concurrency – And How to Make It Work

Python’s async/await syntax lets you pause and resume functions, but it alone doesn’t run tasks concurrently. You must explicitly schedule coroutines with tools like asyncio.create_task(), gather(), or TaskGroup, and know when to choose sequential versus concurrent execution to avoid common pitfalls.

Code Mala Tang

When I first encountered Python’s async/await, I thought I had unlocked instant concurrency, and I sprinkled await everywhere expecting a huge performance boost.

Reality hit hard: my "asynchronous" code still executed step‑by‑step. No speedup, no magic—just sequential execution with extra keywords.

The truth is that async and await only provide the ability to pause and resume. To truly run multiple tasks at the same time, you need tools such as asyncio.create_task(), gather(), or TaskGroup to schedule them.

First: What Is a Coroutine?

A coroutine is a special function in Python that can pause and resume.

You define it with async def.

You can pause it with await.

The event loop decides when to resume it.

Example:

import asyncio

async def greet():
    print("Hello")
    await asyncio.sleep(1)
    print("World")

Calling greet() returns a coroutine object and does nothing until you hand it to the event loop:

asyncio.run(greet())

You can think of a coroutine as a TV show you can pause and resume at any time, whereas a normal function is like a movie that plays straight through.
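You can verify this distinction directly. A minimal sketch, reusing the greet() coroutine from above:

```python
import asyncio

async def greet():
    print("Hello")
    await asyncio.sleep(0)
    print("World")

coro = greet()               # builds a coroutine object; no code has run yet
print(type(coro).__name__)   # prints: coroutine
asyncio.run(coro)            # the event loop drives it: prints Hello, then World
```

If you create a coroutine object and never await or run it, Python emits a "coroutine was never awaited" RuntimeWarning — a useful clue that you forgot to schedule it.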

This pause‑and‑resume ability makes async programming powerful: when a coroutine is waiting (e.g., for network or disk I/O), Python can switch to another coroutine instead of staying idle.

Common Misunderstandings

Consider this naïve code:

import asyncio, time

async def fetch_data(url):
    print(f"Fetching {url}")
    await asyncio.sleep(2)
    return f"Data from {url}"

async def main():
    start = time.time()
    result1 = await fetch_data("https://api1.com")
    result2 = await fetch_data("https://api2.com")
    result3 = await fetch_data("https://api3.com")
    end = time.time()
    print(f"Total time: {end - start:.2f}s")

asyncio.run(main())

This takes about 6 seconds. Even though we used async and await, each call waits for the previous one to finish.

Solution: Use create_task() for Scheduling

To truly run independent tasks concurrently you must schedule them:

import asyncio, time

async def fetch_data(url):
    print(f"Fetching {url}")
    await asyncio.sleep(2)
    return f"Data from {url}"

async def main():
    start = time.time()
    task1 = asyncio.create_task(fetch_data("https://api1.com"))
    task2 = asyncio.create_task(fetch_data("https://api2.com"))
    task3 = asyncio.create_task(fetch_data("https://api3.com"))
    results = await asyncio.gather(task1, task2, task3)
    end = time.time()
    print(f"Total time: {end - start:.2f}s")
    print("Results:", results)

asyncio.run(main())

Now all tasks overlap and the program finishes in about 2 seconds, the duration of the slowest task.

Mental Model: Async ≠ Concurrent

async → defines a coroutine that can be paused and resumed.

await → tells the coroutine to pause until something completes.

create_task() → actually puts multiple coroutines into the event loop simultaneously.

Analogy: await is like standing in a single‑file line, while create_task() is like buying several tickets at once and waiting together.

Alternatives to create_task()

asyncio.gather() (the simplest way to run many tasks):

results = await asyncio.gather(
    fetch_data("https://api1.com"),
    fetch_data("https://api2.com"),
    fetch_data("https://api3.com"),
)

asyncio.as_completed() (process results as they arrive):

for task in asyncio.as_completed([
    fetch_data("https://api1.com"),
    fetch_data("https://api2.com"),
    fetch_data("https://api3.com"),
]):
    result = await task
    print("Got:", result)

TaskGroup (Python 3.11+, for structured concurrency):

async with asyncio.TaskGroup() as tg:
    tg.create_task(fetch_data("https://api1.com"))
    tg.create_task(fetch_data("https://api2.com"))
    tg.create_task(fetch_data("https://api3.com"))

Structured concurrency makes the code clearer and safer.

Real‑World Case: Handling API Rate Limits

Concurrency also helps control load. For an API that allows only 10 concurrent requests, you can use a semaphore:

import asyncio, aiohttp
from asyncio import Semaphore

class APIClient:
    def __init__(self, max_concurrent=5):
        self.sem = Semaphore(max_concurrent)

    async def fetch(self, session, uid):
        async with self.sem:
            async with session.get(f"https://api.com/users/{uid}") as resp:
                return await resp.json()

    async def fetch_all(self, uids):
        async with aiohttp.ClientSession() as s:
            tasks = [asyncio.create_task(self.fetch(s, uid)) for uid in uids]
            return await asyncio.gather(*tasks)

client = APIClient(max_concurrent=10)
results = asyncio.run(client.fetch_all(range(50)))

This keeps the program fast while protecting the server.

Common Pitfalls

❌ Sequential awaiting

res1 = await op1()
res2 = await op2()

✅ Concurrent scheduling

res1, res2 = await asyncio.gather(op1(), op2())

❌ Creating tasks too late

t1 = asyncio.create_task(op1())
res1 = await t1

✅ Create all tasks first, then await them together

t1 = asyncio.create_task(op1())
t2 = asyncio.create_task(op2())
res1, res2 = await asyncio.gather(t1, t2)

When to Use Sequential vs. Concurrent

Sequential : when tasks depend on each other.

user = await fetch_user()
settings = await fetch_settings(user.id)

Concurrent : when tasks are independent and I/O‑bound.

user_task = asyncio.create_task(fetch_user())
settings_task = asyncio.create_task(fetch_settings())
user, settings = await asyncio.gather(user_task, settings_task)

Key Takeaways

Coroutines are functions that can pause and resume.

async/await ≠ concurrency: await waits, create_task() schedules.

Use gather() or TaskGroup to combine tasks.

Control concurrency with semaphores.

For CPU‑bound work, use threads or processes instead of async.
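That last point deserves a sketch: offloading blocking work keeps the event loop responsive while the heavy computation runs elsewhere. A minimal example using asyncio.to_thread (Python 3.9+); crunch() is a made-up CPU-bound function:

```python
import asyncio

def crunch(n):
    # CPU-bound, pure-Python loop: awaiting this inline would freeze the event loop
    return sum(i * i for i in range(n))

async def main():
    # to_thread hands the blocking call to a worker thread so other coroutines
    # keep running; for heavy pure-Python math, a ProcessPoolExecutor via
    # loop.run_in_executor() also sidesteps the GIL for a real speedup
    results = await asyncio.gather(
        asyncio.to_thread(crunch, 200_000),
        asyncio.to_thread(crunch, 200_000),
    )
    print(results)

asyncio.run(main())
```

Note that for pure-Python computation, threads only keep the loop responsive — they don't make the math faster; processes do.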

Final Thoughts

Understanding coroutines and scheduling was a turning point for me. Once I started using create_task() (and later TaskGroup), my async code became truly concurrent, delivering dramatically better performance and clarity.

Next time you sprinkle await everywhere, ask yourself: “Do these tasks really need to run sequentially, or should I schedule them together?” In most cases, create_task() is your best friend. 🚀

Tags: concurrency, async/await, coroutine, asyncio, task-scheduling
Written by Code Mala Tang

Read source code together, write articles together, and enjoy spicy hot pot together.
