
10 Hidden Python Tricks to Supercharge Performance

This article reveals ten often‑overlooked Python performance techniques—from using enumerate and array structures to leveraging Numba, Polars, and generators—showing how careful coding, profiling, and modern libraries can turn sluggish scripts into lightning‑fast production workloads.


Most Python performance problems stem from developer choices rather than the language itself; inefficient data structures, unnecessary allocations, redundant calculations, and habits carried over from other languages are common culprits.


1. Stop using range(len(...)) – use enumerate

my_list = [10, 20, 30]

# Slow
for i in range(len(my_list)):
    value = my_list[i]

# Faster and more Pythonic
for i, value in enumerate(my_list):
    ...

Using enumerate avoids the repeated my_list[i] index lookups: the iterator yields each index-value pair directly, and the loop reads more clearly.
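
enumerate also accepts a start argument, handy when output should be numbered from 1; a quick sketch with illustrative data:

tasks = ["lint", "test", "deploy"]
for step, task in enumerate(tasks, start=1):
    print(f"Step {step}: {task}")  # Step 1: lint, Step 2: test, ...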

2. Replace large lists with array, deque, or numpy

from array import array
my_array = array('i', [1, 2, 3, 4])  # Stores ints in a compact C buffer, not boxed objects

For numeric data, prefer numpy arrays or Polars Series, whose vectorized operations can be orders of magnitude faster than looping over plain lists.
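
Since the heading also mentions deque, here is a minimal sketch of where each structure pays off (the sample data is illustrative):

from collections import deque
import numpy as np

# deque: O(1) appends and pops at both ends, unlike list.pop(0)
queue = deque([1, 2, 3])
queue.appendleft(0)   # cheap at the left end
queue.pop()           # cheap at the right end

# numpy: one vectorized operation over the whole buffer
values = np.arange(1_000_000)
doubled = values * 2  # no Python-level loop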

3. Speed up pure functions with functools.lru_cache

from functools import lru_cache

@lru_cache(maxsize=128)
def expensive_computation(x, y):
    ...

Memoization can dramatically improve performance wherever a pure function is called repeatedly with the same hashable arguments, a common pattern in machine-learning pipelines.
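
As a concrete sketch, the classic recursive Fibonacci makes the effect visible; without the cache, the call below would effectively never finish:

from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache: every subproblem is computed once
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(200))          # returns instantly
print(fib.cache_info())  # inspect hits, misses, and cache size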

4. Compile Python with Numba

from numba import jit

@jit(nopython=True)
def compute(x):
    ...

JIT‑compiled functions can run up to 100× faster, especially for loops and numeric code.
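
A runnable sketch (function and data are illustrative); note that the first call pays a one-time compilation cost, so warm the function up before benchmarking:

import numpy as np
from numba import jit

@jit(nopython=True)
def sum_of_squares(arr):
    total = 0.0
    for x in arr:        # a plain loop, compiled to machine code
        total += x * x
    return total

data = np.random.rand(10_000_000)
sum_of_squares(data[:10])    # warm-up call triggers compilation
print(sum_of_squares(data))  # subsequent calls run at native speed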

5. Switch Pandas to Polars for DataFrames

import polars as pl
df = pl.read_csv("data.csv")
df = df.filter(pl.col("sales") > 1000)
Polars, written in Rust, can be ten times or more faster than Pandas on large datasets.
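
Polars' lazy API goes further by optimizing the whole query plan before touching the file; a sketch, assuming data.csv has sales and region columns:

import polars as pl

result = (
    pl.scan_csv("data.csv")          # nothing is read yet
    .filter(pl.col("sales") > 1000)  # pushed down into the scan
    .group_by("region")
    .agg(pl.col("sales").sum())
    .collect()                       # plan is optimized, then executed
)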

6. Profile before optimizing

python -m cProfile my_script.py

# or line by line with line_profiler
pip install line_profiler
# decorate the functions you want to measure with @profile, then:
kernprof -l my_script.py
python -m line_profiler my_script.py.lprof

Profiling reveals the real bottlenecks; you cannot improve what you cannot measure.
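
cProfile can also be driven from code, which keeps profiling scoped to the part you care about; a minimal sketch with a stand-in workload:

import cProfile
import pstats

def run_pipeline():  # stand-in for your real workload
    return sum(i * i for i in range(1_000_000))

with cProfile.Profile() as profiler:  # context-manager form (Python 3.8+)
    run_pipeline()

stats = pstats.Stats(profiler)
stats.sort_stats("cumulative").print_stats(10)  # ten costliest call paths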

7. Minimize attribute lookups in tight loops

# Bad
for _ in range(1000000):
    value = my_object.some_attribute
# Better
attr = my_object.some_attribute
for _ in range(1000000):
    value = attr

Each attribute access typically involves a dictionary lookup on the instance (and possibly its class); hoisting the value out of the loop pays that cost once instead of a million times.
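
The same trick applies to method lookups; binding a bound method to a local name resolves the attribute once instead of on every iteration:

result = []
append = result.append    # one attribute lookup
for i in range(1_000_000):
    append(i)             # plain local-name call inside the loop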

8. Pre‑allocate lists instead of appending

# Slow
result = []
for i in range(1000000):
    result.append(i)
# Faster
result = [None] * 1000000
for i in range(1000000):
    result[i] = i

Pre‑allocation avoids repeated resizing overhead and is crucial for data‑preprocessing pipelines.
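
For simple fills like this one, a list comprehension is usually faster than either version, since CPython runs comprehensions with specialized bytecode and grows the list internally:

result = [i for i in range(1_000_000)]  # no per-iteration append lookup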

9. Use Pydantic V2 (or msgspec) for fast data validation

from pydantic import BaseModel

class User(BaseModel):
    id: int
    name: str
Pydantic V2 validates through a core written in Rust and is dramatically faster than V1; msgspec, implemented in C, is often faster still, outperforming both Pydantic and traditional dataclasses in benchmarks.
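
A minimal msgspec sketch (the JSON payload is illustrative); decoding and type-checking happen in a single pass:

import msgspec

class User(msgspec.Struct):
    id: int
    name: str

user = msgspec.json.decode(b'{"id": 1, "name": "Ada"}', type=User)
print(user.id, user.name)  # raises msgspec.ValidationError on bad input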

10. Prefer generators over lists

def slow():
    return [x**2 for x in range(10**6)]

def fast():
    return (x**2 for x in range(10**6))

Generators produce values lazily, so peak memory stays flat no matter how many items flow through, while the list version materializes all million squares at once (tens of megabytes here, far more for real datasets).
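
The generator version streams values into its consumer one at a time, though it can only be iterated once:

total = sum(fast())  # peak memory stays tiny; sum(slow()) would build the full list first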

Additional tip: Compile with mypyc

pip install mypy  # the mypyc compiler ships with mypy
mypyc my_module.py

Compiling type‑annotated modules to C extensions can deliver a further speedup, especially for code with concrete types and tight loops.
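
A sketch of a module that compiles well (the file and function names are illustrative): with concrete annotations, mypyc can turn interpreted attribute and arithmetic operations into direct C calls, and importing my_module after compilation picks up the extension automatically.

# my_module.py -- precise annotations let mypyc emit efficient C
def dot(xs: list[float], ys: list[float]) -> float:
    total = 0.0
    for x, y in zip(xs, ys):
        total += x * y
    return total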

In practice, applying these suggestions can shrink minutes‑long processing times to seconds without rewriting code in another language; the key is to treat Python as the powerful tool it is and use its native features wisely.

Tags: Performance, Optimization, Python, Profiling, Generators, Numba, Pydantic, Polars
Written by

Python Programming Learning Circle

A global community of Chinese Python developers offering technical articles, columns, original video tutorials, and problem sets. Topics include web full‑stack development, web scraping, data analysis, natural language processing, image processing, machine learning, automated testing, DevOps automation, and big data.
