Solving Differential Equations with Physics‑Informed Neural Networks in PyTorch

This article explains how to build a Physics‑Informed Neural Network (PINN) in PyTorch to solve a simple logistic ordinary differential equation, covering the underlying theory, loss formulation with equation residuals and boundary conditions, network architecture, automatic differentiation, and training results.

Physics‑Informed Neural Networks (PINNs) have emerged in the scientific machine‑learning community as a way to solve partial and ordinary differential equations (PDEs/ODEs) by embedding the governing physics directly into the training loss [2]. The article first reviews the rapid adoption of artificial neural networks in fields such as computer vision and natural language processing, then introduces PINNs as a promising alternative to traditional finite‑element methods, especially when external data (e.g., sensor measurements) are available.

To illustrate the method, the logistic growth equation – a classic population model dating back to the 19th century – is chosen. In the form used throughout the article's code, the equation reads

$$\frac{df(t)}{dt} = R\,t\,(1 - t)$$

with the initial condition

$$f(t = 0) = 1,$$

where R is the growth rate. Although an analytical solution exists, the example serves to demonstrate the PINN workflow, which can be extended to more complex ODEs and PDEs.
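
Although the article does not write it out, this particular right‑hand side integrates directly, giving the closed‑form solution the trained network should reproduce:

$$f(t) = 1 + R\left(\frac{t^{2}}{2} - \frac{t^{3}}{3}\right).$$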

The method relies on two fundamental properties of neural networks: (1) the universal approximation theorem [1], which guarantees that a feed‑forward network with enough hidden units can approximate any continuous function to arbitrary accuracy, and (2) automatic differentiation (AD), which provides exact derivatives of the network output with respect to its inputs. These properties allow the network output f_NN(t) to be substituted into the differential equation and its residual to be computed automatically.
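
As a brief illustration of the second property (not part of the original article), torch.autograd.grad – the same call the PINN later applies to its own output – recovers the exact derivative of a known function:

import torch

# differentiate sin(t) with autograd and compare against the exact derivative cos(t)
t = torch.linspace(0, 3, steps=50, requires_grad=True)
y = torch.sin(t)

# grad_outputs is required because y is a vector rather than a scalar
(dy_dt,) = torch.autograd.grad(y, t, grad_outputs=torch.ones_like(y))

print(torch.allclose(dy_dt, torch.cos(t)))  # True: the derivative is exact, not a finite difference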

The loss is constructed from two parts. First, the equation residual is evaluated at a set of collocation points t_i and averaged in the mean‑squared sense:

$$\mathcal{L}_{DE} = \frac{1}{N}\sum_{i=1}^{N}\left(\frac{df_{NN}}{dt}(t_i) - R\,t_i\,(1 - t_i)\right)^{2}.$$

Second, the boundary condition contributes a term of the same form:

$$\mathcal{L}_{BC} = \bigl(f_{NN}(t = 0) - 1\bigr)^{2}.$$

The total loss combines both terms:

$$\mathcal{L} = \mathcal{L}_{DE} + \mathcal{L}_{BC}.$$

The neural network architecture used in the example consists of an input layer, a configurable number of hidden layers with tanh activation, and a linear output layer. The implementation in PyTorch is shown below:

from torch import nn

class NNApproximator(nn.Module):
    """Simple neural network accepting one feature as input and returning a single output.

    In the context of PINNs, the neural network is used as a universal function
    approximator to approximate the solution of the differential equation.
    """

    def __init__(self, num_hidden: int, dim_hidden: int, act=nn.Tanh()):
        super().__init__()
        self.layer_in = nn.Linear(1, dim_hidden)
        self.layer_out = nn.Linear(dim_hidden, 1)
        # num_hidden counts all hidden layers; layer_in already provides one
        num_middle = num_hidden - 1
        self.middle_layers = nn.ModuleList(
            [nn.Linear(dim_hidden, dim_hidden) for _ in range(num_middle)]
        )
        self.act = act

    def forward(self, x):
        out = self.act(self.layer_in(x))
        for layer in self.middle_layers:
            out = self.act(layer(out))
        return self.layer_out(out)
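
A minimal instantiation might look like the following sketch (the layer sizes are illustrative, not prescribed by the article):

import torch

# two hidden layers of width 10; hypothetical sizes chosen for illustration
nn_approximator = NNApproximator(num_hidden=2, dim_hidden=10)

# the network maps a column of time points, shape (N, 1), to f_NN(t), shape (N, 1)
t = torch.linspace(0, 1, steps=10).reshape(-1, 1)
print(nn_approximator(t).shape)  # torch.Size([10, 1])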

To compute the loss, the article defines helper functions that use PyTorch's autograd engine to obtain arbitrary‑order derivatives:

import torch

def f(nn: NNApproximator, x: torch.Tensor) -> torch.Tensor:
    """Compute the value of the approximate solution from the NN model."""
    return nn(x)

def df(nn: NNApproximator, x: torch.Tensor, order: int = 1) -> torch.Tensor:
    """Compute the neural network derivative of the given order with respect
    to the input feature(s) using PyTorch's autograd engine."""
    df_value = f(nn, x)
    for _ in range(order):
        df_value = torch.autograd.grad(
            df_value,
            x,
            # seed the vector-Jacobian product with ones, matching the output shape
            grad_outputs=torch.ones_like(df_value),
            # keep the graph so higher orders and weight gradients remain available
            create_graph=True,
            retain_graph=True,
        )[0]
    return df_value
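
A quick sanity check, not in the original article, is to compare df against a central finite difference; in single precision the two agree to roughly 1e-3, limited by floating-point cancellation in the difference quotient:

eps = 1e-4
nn_approximator = NNApproximator(num_hidden=2, dim_hidden=10)  # hypothetical sizes
t = torch.linspace(0.1, 0.9, steps=5, requires_grad=True).reshape(-1, 1)

# central finite difference (f(t + eps) - f(t - eps)) / (2 * eps)
with torch.no_grad():
    fd = (f(nn_approximator, t + eps) - f(nn_approximator, t - eps)) / (2 * eps)

print(torch.allclose(df(nn_approximator, t), fd, atol=1e-3))  # True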

Collocation points are chosen uniformly in the time domain, e.g., t = torch.linspace(0, 1, steps=10, requires_grad=True).reshape(-1, 1), where the reshape produces the (N, 1) shape that the network's single input feature expects. The interior loss and boundary loss are then assembled:

R = 1.0   # growth rate of the logistic equation (illustrative value)
T0 = 0.0  # initial time
F0 = 1.0  # boundary condition value f(T0)

nn_approximator = NNApproximator(num_hidden=2, dim_hidden=10)  # hypothetical sizes
t = torch.linspace(0, 1, steps=10, requires_grad=True).reshape(-1, 1)

# DE contribution: residual of df/dt = R * t * (1 - t) at the collocation points
interior_loss = df(nn_approximator, t) - R * t * (1 - t)

# boundary contribution: enforce f(T0) = F0
boundary = torch.tensor([[T0]], requires_grad=True)
boundary_loss = f(nn_approximator, boundary) - F0

# .mean() reduces both terms to scalars so that backward() can be called
final_loss = interior_loss.pow(2).mean() + boundary_loss.pow(2).mean()

Training proceeds with a standard optimizer; the article uses stochastic gradient descent with a learning rate of 0.1 for 20,000 epochs on 10 collocation points. After repeated back‑propagation steps (final_loss.backward() followed by a parameter update), the network converges to the analytical logistic solution.

[Figure: trained PINN approximation versus the analytical solution]

[Figure: training loss over epochs]

The author notes that the Adam optimizer can achieve comparable accuracy in fewer epochs, but the focus remains on illustrating the PINN mechanism rather than on benchmarking optimizers.
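
The full loop is not reproduced in the article; a minimal sketch consistent with the stated hyperparameters (SGD, learning rate 0.1, 20,000 epochs), reusing the names defined above, might look like this:

optimizer = torch.optim.SGD(nn_approximator.parameters(), lr=0.1)

for epoch in range(20_000):
    optimizer.zero_grad()

    # rebuild the loss each step so autograd constructs a fresh graph
    interior_loss = df(nn_approximator, t) - R * t * (1 - t)
    boundary_loss = f(nn_approximator, boundary) - F0
    final_loss = interior_loss.pow(2).mean() + boundary_loss.pow(2).mean()

    final_loss.backward()
    optimizer.step()

    if epoch % 1_000 == 0:
        print(f"epoch {epoch}: loss = {final_loss.item():.6f}")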

Finally, the article emphasizes that while the logistic example is trivial, the PINN framework is highly flexible: additional boundary conditions, higher‑order derivatives, or multi‑input networks can be incorporated by extending the loss formulation in the same manner.
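
For instance, under a hypothetical second‑order equation such as f''(t) + f(t) = 0 (not treated in the article), only the residual line changes, since the df helper already supports higher orders:

# residual of a hypothetical second-order ODE f''(t) + f(t) = 0
interior_loss_2nd = df(nn_approximator, t, order=2) + f(nn_approximator, t)
# squared, averaged, and combined with boundary terms exactly as before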

References

[1] Kurt Hornik, Maxwell Stinchcombe, and Halbert White, "Multilayer feedforward networks are universal approximators," Neural Networks 2, 359–366 (1989).

[2] George Em Karniadakis et al., "Physics-informed machine learning," Nature Reviews Physics 3, 422–440 (2021).

[3] Ben Moseley, "So, what is a physics-informed neural network?"
