Scale‑PINN Solves High‑Re Navier‑Stokes in 100 seconds, Cutting Error by 96 %
This tutorial introduces Scale‑PINN, which adds an evolutionary regularization term, inspired by pseudo‑time stepping, to the PINN loss. With this term, a shared‑backbone network solves the lid‑driven cavity Navier‑Stokes problem at Re = 7500 in about 100 seconds and reduces the relative velocity error by roughly 96 % compared with a standard PINN.
Introduction
Physics‑informed neural networks (PINNs) embed PDE residuals in the loss function, enabling mesh‑free solutions. For the 2‑D lid‑driven cavity at Reynolds number 7500, standard PINNs suffer from convection‑dominated gradients, highly non‑convex loss landscapes, and stagnating PDE residuals, leading to relative errors of order 1.
Scale‑PINN Method
Scale‑PINN introduces an evolutionary regularization term that penalizes the difference between the current solution and the solution from the previous training step. This converts the steady problem into a sequence of easier sub‑problems, analogous to pseudo‑time stepping in classical CFD.
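The sequential-correction idea can be sketched in a few lines of JAX. Here a toy quadratic residual stands in for the real PDE loss, and the θ₀ refresh cadence (every 50 steps) is an assumption for illustration; only the names `ER` and `eval_loss` mirror the tutorial:

```python
import jax
import jax.numpy as jnp

ER = 0.095  # evolutionary coefficient, as in the tutorial's setup

def eval_loss(params, params_0, x):
    residual = jnp.sum((params * x - jnp.sin(x)) ** 2)  # stand-in PDE residual
    evolution = jnp.sum((params - params_0) ** 2) / ER  # pull toward theta_0
    return residual + evolution

grad_fn = jax.jit(jax.grad(eval_loss))
x = jnp.linspace(0.0, 1.0, 16)
params = jnp.zeros(16)
params_0 = params  # theta_0: frozen solution from the previous sub-problem

for step in range(300):
    params = params - 1e-3 * grad_fn(params, params_0, x)
    if (step + 1) % 50 == 0:
        params_0 = params  # advance the reference, easing the next sub-problem
```

Each refresh of `params_0` plays the role of advancing one pseudo‑time step: the network only ever has to make an incremental correction relative to a nearby reference.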
Mathematical formulation
Let θ be the current network parameters and θ₀ the parameters from the previous step, and let u₀, v₀, p₀ denote the fields predicted by θ₀. The corrected continuity residual becomes

res_cont ← res_cont + (p − p₀) / ER

The momentum residuals are corrected analogously, with terms (u − u₀)/ER and (v − v₀)/ER plus a diffusion adjustment scaled by a second coefficient ER_xx. When ER = 0 the method reduces to the standard PINN.
Connection to classic CFD
Pseudo‑time step ↔ evolutionary coefficient ER
Previous time‑step solution ↔ solution from parameters θ₀
Implicit Euler discretisation ↔ parameter update per training step
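Written out, the correspondence is an implicit Euler pseudo‑time step (a sketch; Δτ denotes the pseudo‑time step and R the steady residual operator):

```latex
\frac{u^{n+1} - u^{n}}{\Delta\tau} + R\!\left(u^{n+1}\right) = 0,
\qquad \Delta\tau \leftrightarrow \mathrm{ER},
\qquad u^{n} \leftrightarrow u_0 \text{ (predicted by } \theta_0\text{)}.
```

This matches the corrected momentum residual R(u) + (u − u₀)/ER = 0, so driving the corrected residual to zero advances the solution by one pseudo‑time step per parameter update.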
Network Architecture
A shared backbone extracts low‑dimensional features, followed by three independent branches that predict the velocity components u, v and pressure p. Fourier feature embedding maps the 2‑D coordinates to a 256‑dimensional high‑frequency space, and a sine activation adds periodic non‑linearity.
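The embedding step might look like the following sketch. The frequency matrix `B` (here Gaussian, mapping 2 → 128) is an assumption, sized so that the sin/cos pair yields the 256‑dimensional embedding described above:

```python
import jax
import jax.numpy as jnp

# Hypothetical random Fourier frequency matrix: 2-D coords -> 128 frequencies
B = jax.random.normal(jax.random.PRNGKey(0), (2, 128))

def fourier_features(xy):
    proj = 2.0 * jnp.pi * xy @ B  # (batch, 128)
    # sin/cos pair doubles the width to the 256-dim high-frequency space
    return jnp.concatenate([jnp.sin(proj), jnp.cos(proj)], axis=-1)

xy = jnp.array([[0.5, 0.5]])
print(fourier_features(xy).shape)  # (1, 256)
```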
```python
def get_uvp(x, y):
    # Coordinates and their mirror offsets as network inputs
    inp = jnp.hstack([x, y, x - 1.0, y - 1.0])
    hidden = self.feats(inp)               # Fourier feature embedding
    hidden = jnp.sin(2 * jnp.pi * hidden)  # periodic non-linearity
    for lyr in self.layers:
        hidden = lyr(hidden)
    # ... branches for u, v, p
```

Automatic differentiation (JAX `jacfwd`) computes the first‑ and second‑order spatial derivatives required for the PDE residuals. Parameters are flattened with `flatten_util.ravel_pytree` so the same vector can be fed both to the PINN module (full loss) and to a lightweight DNN module (reference solution).
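A minimal sketch of this derivative and flattening machinery, with a toy scalar field `f` standing in for the network output:

```python
import jax
import jax.numpy as jnp
from jax import flatten_util

# Toy stand-in field (not the tutorial's network) to show forward-mode AD
def f(x, y):
    return jnp.sin(x) * jnp.cos(y)

f_x = jax.jacfwd(f, argnums=0)     # du/dx via forward-mode AD
f_xx = jax.jacfwd(f_x, argnums=0)  # d2u/dx2 (note: f_xx = -f for this field)

x, y = 0.3, 0.7

# ravel_pytree flattens a parameter pytree into one vector and returns the
# inverse mapping, so the same vector can drive several modules.
params = {"b": jnp.zeros(3), "w": jnp.ones((2, 3))}
flat, unravel_fn = flatten_util.ravel_pytree(params)
```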
```python
def eval_loss(params, params_0, inputs, labels):
    pred = model.apply(unravel_fn(params), inputs)
    u, v, p, res_cont, res_mom1, res_mom2, bc, nbc, m_1, m_2 = \
        jnp.split(pred, 10, axis=1)
    # Frozen reference solution from the previous-step parameters
    pred0 = model_0.apply(unravel_fn(params_0), inputs)
    u_0, v_0, p_0, m0_1, m0_2 = jnp.split(pred0, 5, axis=1)
    if ER > 0:
        # Evolutionary correction: penalize drift from the reference
        res_cont = res_cont + (p - p_0) / ER
        res_mom1 = res_mom1 + (u - u_0) / ER + (m_1 - m0_1) / ER_xx
        res_mom2 = res_mom2 + (v - v_0) / ER + (m_2 - m0_2) / ER_xx
    pde_uvp = jnp.square(res_cont) + jnp.square(res_mom1) + jnp.square(res_mom2)
    pde_loss = jnp.sum(pde_uvp * nbc) / nbc.sum()
    # ... add BC loss
    return total_loss
```

Experimental Setup
Domain: unit‑square lid‑driven cavity, Reynolds number 7500.
Network: hidden width 64, Fourier feature dimension 256, total parameters 59 520.
Optimizer: Adam with cosine decay (exponent = 1.2).
Total iterations: 50 000.
Standard PINN (M1): evolutionary coefficient ER = 0.
Scale‑PINN (M2): ER = 0.095, ER_xx = 0.5.
Identical random seed (50) and batch sizes (internal = 950, boundary = 50).
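The cosine‑decay schedule with exponent 1.2 can be sketched as follows; the base learning rate of 1e‑3 is an assumption, as the tutorial does not state it:

```python
import jax.numpy as jnp

TOTAL_STEPS = 50_000
LR_INIT = 1e-3  # assumption: base learning rate not stated in the tutorial

def cosine_decay(step, exponent=1.2):
    # Cosine decay raised to an exponent (cf. optax.cosine_decay_schedule);
    # clipping guards against tiny negative cosine values at the endpoint.
    cos_term = jnp.clip(0.5 * (1.0 + jnp.cos(jnp.pi * step / TOTAL_STEPS)), 0.0, 1.0)
    return LR_INIT * cos_term ** exponent

lr_start = cosine_decay(0)           # full learning rate at the start
lr_end = cosine_decay(TOTAL_STEPS)   # decays toward zero at the end
```

An exponent above 1 keeps the rate near its initial value longer before decaying, which suits the fine‑tuning phase in the final iterations.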
Results
Convergence behaviour
The standard PINN loss plateaus after ~5 k steps and its relative velocity error remains between 0.8 and 1.0. Scale‑PINN exhibits three phases: an initial rapid drop (0–10 k steps) driven by the evolutionary term, a fast convergence phase (10–40 k steps) as the reference solution improves, and a fine‑tuning phase (40–50 k steps) under learning‑rate decay.
Flow‑field accuracy
Standard PINN predicts an almost uniform velocity magnitude and a physically meaningless pressure field. Scale‑PINN reproduces the off‑center primary vortex, secondary corner vortices, high pressure near the moving lid and low pressure in vortex cores, matching the reference solution.
Quantitative metrics
Best relative velocity error improvement: 96.7 %.
Final relative velocity error improvement: 96.8 %.
Final MSE reduced by three orders of magnitude.
Training time: 100.4 s (M1) vs 110.0 s (M2), ≈ 9.6 % overhead.
Time‑accuracy efficiency: 33.6 × better for Scale‑PINN.
Discussion
Scale‑PINN effectively embeds a pseudo‑time stepping scheme, turning an “unsolvable” high‑Re steady Navier‑Stokes problem into a solvable sequence of incremental corrections. The additional computational cost is modest (≈10 % extra training time and ≈5 % extra memory for storing the previous parameter vector).
Limitations include sensitivity to the hyper‑parameters ER and ER_xx, the assumption of a unique steady solution, and the current focus on steady PDEs.
Future directions: adaptive schedules for ER, combination with NTK‑based adaptive weighting, multi‑step historical references (BDF‑style), extension to three‑dimensional turbulent and transient problems, and automated hyper‑parameter search via Bayesian optimisation or meta‑learning.
References
[1] Chiu, P.-H. et al. “Scale‑PINN: Learning Efficient Physics‑Informed Neural Networks Through Sequential Correction.” arXiv:2602.19475.
[2] Raissi, M., Perdikaris, P., & Karniadakis, G. E. “Physics‑informed neural networks.” Journal of Computational Physics, 378, 686–707 (2019).
[4] Ghia, U., Ghia, K. N., & Shin, C. T. “High‑Re solutions for incompressible flow using the Navier‑Stokes equations and a multigrid method.” Journal of Computational Physics, 48(3), 387–411 (1982).
[5] Cao, W. & Zhang, W. “An analysis and solution of ill‑conditioning in physics‑informed neural networks.” Journal of Computational Physics, 520, 113494 (2025).
[6] Bradbury, J. et al. “JAX: Composable transformations of Python+NumPy programs.” http://github.com/jax-ml/jax.
Open‑source implementation: https://github.com/xgxgnpu/Physics-informed-vibe-coding.