Unlock Python Speed: 12 Little‑Known Tricks to Turbocharge Your Code
Python is praised for its clarity but often deemed slow; this article reveals twelve overlooked, sometimes unconventional techniques—from using enumerate instead of range loops to leveraging Numba, Polars, and mypyc—that can dramatically accelerate data pipelines, APIs, and scientific workloads without rewriting code in another language.
If you work on cloud infrastructure, SRE, or data‑science pipelines, you know both Python’s elegance and its occasional sluggishness. Most performance bottlenecks stem from how we use the language: inefficient data structures, unnecessary allocations, redundant calculations, or habits carried over from other languages.
01 Stop using range(len(...)) — use enumerate
Replacing manual index handling with enumerate removes a redundant index lookup on every iteration and reads far more clearly.
# Slow
for i in range(len(my_list)):
    value = my_list[i]
# Faster and more Pythonic
for i, value in enumerate(my_list):
    ...
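If you want to verify the gap on your own workload, a quick timeit comparison works well; this is a minimal sketch with an arbitrary list size and trivial loop bodies:
import timeit

my_list = list(range(100_000))

def with_range_len():
    total = 0
    for i in range(len(my_list)):
        total += my_list[i]      # pays an index lookup on every pass
    return total

def with_enumerate():
    total = 0
    for _, value in enumerate(my_list):
        total += value           # value is already unpacked
    return total

print(timeit.timeit(with_range_len, number=100))
print(timeit.timeit(with_enumerate, number=100))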
02 Avoid list for large collections — use array, deque, numpy or polars.Series
Lists are flexible but memory‑inefficient for numeric data. Fixed‑type arrays or Rust‑backed Polars provide massive speedups.
from array import array
my_array = array('i', [1, 2, 3, 4])  # Much faster than a list for ints
Or use NumPy/Polars:
import numpy as np
arr = np.array([1, 2, 3, 4])
import polars as pl
df = pl.read_csv("data.csv")
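deque from the standard library deserves a mention too: it will not shrink numeric data the way array or NumPy do, but it makes queue-style appends and pops at either end O(1), where a plain list pays O(n) to pop from the front. A minimal sketch:
from collections import deque

queue = deque()
queue.append("job-1")       # O(1) append on the right
queue.append("job-2")
first = queue.popleft()     # O(1) pop on the left; list.pop(0) would be O(n)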
03 Accelerate pure functions with functools.lru_cache
Memoisation dramatically speeds up deterministic functions that are called repeatedly with the same arguments.
from functools import lru_cache
@lru_cache(maxsize=128)
def expensive_computation(x, y):
    ...
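To see the effect concretely, here is a minimal sketch with a recursive Fibonacci; the function name and cache size are illustrative:
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(35)                   # returns instantly; the uncached version takes seconds
print(fib.cache_info())   # hits/misses show how often results were reused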
04 Compile Python to machine code with Numba
Applying @jit(nopython=True) can make loops run up to 100× faster without changing algorithmic logic.
from numba import jit
@jit(nopython=True)
def compute(x):
    ...
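A minimal sketch of the kind of numeric loop where nopython mode shines; the array size is arbitrary and the first call includes one-off compilation time:
import numpy as np
from numba import jit

@jit(nopython=True)
def sum_of_squares(values):
    total = 0.0
    for v in values:          # a plain Python loop, compiled to machine code
        total += v * v
    return total

data = np.random.rand(1_000_000)
sum_of_squares(data)          # first call compiles; subsequent calls are fast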
05 Swap Pandas for Polars for DataFrames
Pandas is largely single‑threaded; Polars, written in Rust, offers native multithreading and can be ten times faster on large datasets.
import polars as pl
df = pl.read_csv("data.csv")
df = df.filter(pl.col("sales") > 1000)
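Polars' lazy API adds another layer of speed, because the query optimiser can push filters down and parallelise the plan; a minimal sketch that assumes the same hypothetical data.csv with sales and region columns:
import polars as pl

result = (
    pl.scan_csv("data.csv")            # lazy: nothing is read yet
      .filter(pl.col("sales") > 1000)
      .group_by("region")
      .agg(pl.col("sales").sum())
      .collect()                       # the optimised plan executes here
)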
06 Profile before optimizing
Use the built‑in cProfile or line_profiler to locate real bottlenecks.
python -m cProfile my_script.py
pip install line_profiler
@profile
def my_func():
    ...
kernprof -l my_script.py
python -m line_profiler my_script.py.lprof
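cProfile can also be driven from code when only one hot path matters; a minimal sketch, with a stand-in function in place of your real workload:
import cProfile
import pstats

def my_func():
    return sum(i * i for i in range(100_000))    # stand-in for the real hot path

profiler = cProfile.Profile()
profiler.enable()
my_func()
profiler.disable()

pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)   # ten costliest calls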
07 Minimise attribute look‑ups in tight loops
Cache attribute values outside the loop to avoid repeated dictionary look‑ups.
# Bad
for _ in range(1000000):
    value = my_object.some_attribute
# Better
attr = my_object.some_attribute
for _ in range(1000000):
    value = attr
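The same trick applies to method look-ups: binding a method to a local name once avoids re-resolving it on every iteration. A minimal sketch:
results = []
append = results.append        # resolve the method once, outside the loop
for i in range(1_000_000):
    append(i * 2)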
08 Pre‑allocate lists instead of appending
Appending is convenient but costly at scale; allocate the full size first.
# Slow
result = []
for i in range(1000000):
    result.append(i)
# Faster
result = [None] * 1000000
for i in range(1000000):
    result[i] = i
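When the whole list can be produced in a single pass, a comprehension (or the list constructor) is usually faster still, because the loop runs in C rather than in bytecode; a minimal sketch of the same fill:
result = [i for i in range(1_000_000)]   # comprehension: the loop runs in C
result = list(range(1_000_000))          # for this trivial case, faster yet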
09 Use Pydantic v2 or msgspec for fast data validation
Pydantic v2 is Rust‑backed; msgspec is even faster.
from pydantic import BaseModel
class User(BaseModel):
    id: int
    name: str
import msgspec
class User(msgspec.Struct):
    id: int
    name: str
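Where the speed shows up is in decoding, because msgspec validates while it parses; a minimal sketch that decodes a made-up JSON payload straight into the struct above:
import msgspec

class User(msgspec.Struct):
    id: int
    name: str

payload = b'{"id": 1, "name": "Ada"}'
user = msgspec.json.decode(payload, type=User)   # parse and validate in one step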
10 Prefer generators over materialising full lists
Generators keep memory usage near zero while still providing all results lazily.
def slow():
    return [x**2 for x in range(10**6)]
def fast():
    return (x**2 for x in range(10**6))
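The payoff comes when the results are consumed once, streamed straight into an aggregate or a loop; a minimal sketch using the fast() generator above:
total = sum(fast())        # streams one value at a time; the full list never exists
for squared in fast():     # or iterate directly; memory stays constant
    if squared > 1_000_000:
        break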
11 Optional: Compile with mypyc
If you already use type hints, mypyc can turn modules into C extensions for a noticeable speed boost.
pip install mypy  # mypyc ships with mypy; no separate package is needed
mypyc my_module.py
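mypyc gets the most out of code whose types it can pin down; a minimal sketch of what a hypothetical my_module.py might contain so the compiled extension can specialise the hot loop:
# my_module.py: ordinary Python, fully annotated so mypyc can specialise it
def dot(xs: list[float], ys: list[float]) -> float:
    total = 0.0
    for x, y in zip(xs, ys):
        total += x * y
    return total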
12 Final thoughts
Python’s perceived slowness is usually a symptom of sub‑optimal usage rather than the language itself. By exploiting native features and under‑used libraries, and by adopting a performance‑first mindset, you can achieve dramatic speedups in production workloads without abandoning Python.