API Reference

neurodiffeq.neurodiffeq

neurodiffeq.neurodiffeq.diff(u, t, order=1, shape_check=True)

The derivative of a variable with respect to another. diff defaults to the behaviour of safe_diff.

Parameters:
  • u (torch.Tensor) – The \(u\) in \(\displaystyle\frac{\partial u}{\partial t}\).
  • t (torch.Tensor) – The \(t\) in \(\displaystyle\frac{\partial u}{\partial t}\).
  • order (int) – The order of the derivative, defaults to 1.
  • shape_check (bool) – Whether to perform shape checking or not, defaults to True (since v0.2.0).
Returns:

The derivative evaluated at t.

Return type:

torch.Tensor
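
Since diff is, per the note above, autograd-based, the following plain-torch sketch (which does not import neurodiffeq) reproduces what first- and second-order calls compute; with the library installed, diff(u, t) and diff(u, t, order=2) should return the same values.

```python
import torch

# Plain-torch sketch of what diff(u, t) computes: the derivative of u with
# respect to t, evaluated at the sampled points. Both tensors have shape
# (n_samples, 1), as safe_diff requires.
t = torch.linspace(0.0, 2.0, 10, requires_grad=True).reshape(-1, 1)
u = t ** 3

grad = torch.ones_like(u)
du = torch.autograd.grad(u, t, grad_outputs=grad, create_graph=True)[0]    # 3 t^2
d2u = torch.autograd.grad(du, t, grad_outputs=grad, create_graph=True)[0]  # 6 t
```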

neurodiffeq.neurodiffeq.safe_diff(u, t, order=1)

The derivative of a variable with respect to another. Both tensors must have a shape of (n_samples, 1). See this issue comment for details.

Parameters:
  • u (torch.Tensor) – The \(u\) in \(\displaystyle\frac{\partial u}{\partial t}\).
  • t (torch.Tensor) – The \(t\) in \(\displaystyle\frac{\partial u}{\partial t}\).
  • order (int) – The order of the derivative, defaults to 1.
Returns:

The derivative evaluated at t.

Return type:

torch.Tensor

neurodiffeq.neurodiffeq.unsafe_diff(u, t, order=1)

The derivative of a variable with respect to another. While there's no requirement for shapes, errors could occur in some cases. See this issue for details.

Parameters:
  • u (torch.Tensor) – The \(u\) in \(\displaystyle\frac{\partial u}{\partial t}\).
  • t (torch.Tensor) – The \(t\) in \(\displaystyle\frac{\partial u}{\partial t}\).
  • order (int) – The order of the derivative, defaults to 1.
Returns:

The derivative evaluated at t.

Return type:

torch.Tensor

neurodiffeq.networks

neurodiffeq.conditions

class neurodiffeq.conditions.BaseCondition

Bases: object

Base class for all conditions.

A condition is a tool to re-parameterize the output(s) of a neural network such that the re-parameterized output(s) automatically satisfy the initial conditions (ICs) and boundary conditions (BCs) of the PDEs/ODEs being solved.

Note

  • The nouns (re-)parameterization and condition are used interchangeably in the documentation and library.
  • The verbs (re-)parameterize and enforce are different in that:
    • (re)parameterize is said of network outputs;
    • enforce is said of networks themselves.
enforce(net, *coordinates)

Enforces this condition on a network.

Parameters:
  • net (torch.nn.Module) – The network whose output is to be re-parameterized.
  • coordinates (torch.Tensor) – Inputs of the neural network.
Returns:

The re-parameterized output, where the condition is automatically satisfied.

Return type:

torch.Tensor

parameterize(output_tensor, *input_tensors)

Re-parameterizes output(s) of a network.

Parameters:
  • output_tensor (torch.Tensor) – Output of the neural network.
  • input_tensors (torch.Tensor) – Inputs to the neural network; i.e., sampled coordinates; i.e., independent variables.
Returns:

The re-parameterized output of the network.

Return type:

torch.Tensor

Note

This method is abstract for BaseCondition.

set_impose_on(ith_unit)

[DEPRECATED] When training several functions with a single, multi-output network, this method is called (by a Solver class or a solve function) to keep track of which output is being parameterized.

Parameters: ith_unit (int) – The index of network output to be parameterized.

Note

This method is deprecated and retained for backward compatibility only. Users interested in enforcing conditions on multi-output networks should consider using a neurodiffeq.conditions.EnsembleCondition.
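
The parameterize/enforce distinction can be illustrated with a toy condition (a hypothetical class for illustration, not part of the library) that pins \(u(0)=0\):

```python
import torch

# Toy condition, for illustration only: re-parameterize the network output
# so that u(0) = 0 holds exactly, for any network.
class ZeroAtOrigin:
    def parameterize(self, output_tensor, t):
        # parameterize acts on the raw network OUTPUT ...
        return t * output_tensor

    def enforce(self, net, t):
        # ... while enforce acts on the NETWORK itself.
        return self.parameterize(net(t), t)

net = torch.nn.Linear(1, 1)
t = torch.zeros(5, 1)
u = ZeroAtOrigin().enforce(net, t)  # exactly zero at t = 0
```

In the library, a custom condition would instead subclass BaseCondition and override parameterize only, inheriting enforce.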

class neurodiffeq.conditions.BundleIVP(t_0=None, u_0=None, u_0_prime=None, bundle_conditions={})

Bases: neurodiffeq.conditions.BaseCondition

An initial value problem of one of the following forms:

  • Dirichlet condition: \(u(t_0,\boldsymbol{\theta})=u_0\).
  • Neumann condition: \(\displaystyle\frac{\partial u}{\partial t}\bigg|_{t = t_0}(\boldsymbol{\theta}) = u_0'\).

Here \(\boldsymbol{\theta}=(\theta_{1},\theta_{2},...,\theta_{n})\in\mathbb{R}^n\), where each \(\theta_i\) represents a parameter, or a condition, of the ODE system that we want to solve.

Parameters:
  • t_0 (float) – The initial time.
  • u_0 (float) – The initial value of \(u\). \(u(t_0,\boldsymbol{\theta})=u_0\).
  • u_0_prime (float, optional) – The initial derivative of \(u\) w.r.t. \(t\). \(\displaystyle\frac{\partial u}{\partial t}\bigg|_{t = t_0}(\boldsymbol{\theta}) = u_0'\). Defaults to None.
  • bundle_conditions (dict[str, int]) – The initial conditions to be included in the total bundle, in addition to the parameters of the ODE system. The values associated with their respective keys in bundle_conditions (e.g., bundle_conditions={'t_0': 0, 'u_0': 1, 'u_0_prime': 2}) must reflect the index of the tuple used in theta_min and theta_max in neurodiffeq.solvers.BundleSolver1D (e.g., theta_min=(t_0_min, u_0_min, u_0_prime_min)). Defaults to {}.
enforce(net, *coordinates)

Enforces this condition on a network.

Parameters:
  • net (torch.nn.Module) – The network whose output is to be re-parameterized.
  • coordinates (torch.Tensor) – Inputs of the neural network.
Returns:

The re-parameterized output, where the condition is automatically satisfied.

Return type:

torch.Tensor

parameterize(output_tensor, t, *theta)

Re-parameterizes outputs such that the Dirichlet/Neumann condition is satisfied.

if t_0 is not included in the bundle:

  • For Dirichlet condition, the re-parameterization is \(\displaystyle u(t,\boldsymbol{\theta}) = u_0 + \left(1 - e^{-(t-t_0)}\right)\) \(\mathrm{ANN}(t,\boldsymbol{\theta})\)
  • For Neumann condition, the re-parameterization is \(\displaystyle u(t,\boldsymbol{\theta}) = u_0 + (t-t_0) u'_0 + \left(1 - e^{-(t-t_0)}\right)^2\) \(\mathrm{ANN}(t,\boldsymbol{\theta})\)

if t_0 is included in the bundle:

  • For Dirichlet condition, the re-parameterization is \(\displaystyle u(t,\boldsymbol{\theta}) = u_0 + \left(t - t_0\right)\) \(\mathrm{ANN}(t,\boldsymbol{\theta})\)
  • For Neumann condition, the re-parameterization is \(\displaystyle u(t,\boldsymbol{\theta}) = u_0 + (t-t_0) u'_0 + \left(t - t_0\right)^2\) \(\mathrm{ANN}(t,\boldsymbol{\theta})\)

Where \(\mathrm{ANN}\) is the neural network.

Parameters:
  • output_tensor (torch.Tensor) – Output of the neural network.
  • t (torch.Tensor) – First input to the neural network; i.e., sampled time-points; i.e., independent variables.
  • theta (tuple[torch.Tensor, .., torch.Tensor]) – Rest of the inputs to the neural network; i.e., sampled bundle-points.
Returns:

The re-parameterized output of the network.

Return type:

torch.Tensor
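
A quick plain-torch check of the Dirichlet case above (with t_0 not included in the bundle): the re-parameterized output equals u_0 at t = t_0 for every bundle point \(\boldsymbol{\theta}\), whatever the (untrained) network outputs. The two-input network below is a stand-in for the one a solver would train.

```python
import torch

# u(t, theta) = u0 + (1 - exp(-(t - t0))) * ANN(t, theta)
# At t = t0 the damping factor vanishes, so u(t0, theta) = u0 exactly.
torch.manual_seed(0)
t0, u0 = 1.0, 3.0
ann = torch.nn.Sequential(torch.nn.Linear(2, 8), torch.nn.Tanh(), torch.nn.Linear(8, 1))

theta = torch.rand(5, 1)            # sampled bundle points
t = torch.full((5, 1), t0)          # evaluate at the initial time
u = u0 + (1 - torch.exp(-(t - t0))) * ann(torch.cat([t, theta], dim=1))
```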

set_impose_on(ith_unit)

[DEPRECATED] When training several functions with a single, multi-output network, this method is called (by a Solver class or a solve function) to keep track of which output is being parameterized.

Parameters: ith_unit (int) – The index of network output to be parameterized.

Note

This method is deprecated and retained for backward compatibility only. Users interested in enforcing conditions on multi-output networks should consider using a neurodiffeq.conditions.EnsembleCondition.

class neurodiffeq.conditions.DirichletBVP(t_0, u_0, t_1, u_1)

Bases: neurodiffeq.conditions.BaseCondition

A double-ended Dirichlet boundary condition: \(u(t_0)=u_0\) and \(u(t_1)=u_1\).

Parameters:
  • t_0 (float) – The initial time.
  • u_0 (float) – The initial value of \(u\). \(u(t_0)=u_0\).
  • t_1 (float) – The final time.
  • u_1 (float) – The final value of \(u\). \(u(t_1)=u_1\).
enforce(net, *coordinates)

Enforces this condition on a network.

Parameters:
  • net (torch.nn.Module) – The network whose output is to be re-parameterized.
  • coordinates (torch.Tensor) – Inputs of the neural network.
Returns:

The re-parameterized output, where the condition is automatically satisfied.

Return type:

torch.Tensor

parameterize(output_tensor, t)

Re-parameterizes outputs such that the Dirichlet condition is satisfied on both ends of the domain.

The re-parameterization is \(\displaystyle u(t)=(1-\tilde{t})u_0+\tilde{t}u_1+\left(1-e^{(1-\tilde{t})\tilde{t}}\right)\mathrm{ANN}(t)\), where \(\displaystyle \tilde{t} = \frac{t-t_0}{t_1-t_0}\) and \(\mathrm{ANN}\) is the neural network.

Parameters:
  • output_tensor (torch.Tensor) – Output of the neural network.
  • t (torch.Tensor) – Input to the neural network; i.e., sampled time-points or another independent variable.
Returns:

The re-parameterized output of the network.

Return type:

torch.Tensor
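
The formula above satisfies both boundary values by construction; a short plain-torch check, with an arbitrary untrained network standing in for \(\mathrm{ANN}\):

```python
import torch

# At t0 and t1 the factor 1 - exp((1 - t~) t~) vanishes, so the boundary
# values u0 and u1 are hit exactly, whatever the network outputs.
torch.manual_seed(0)
t0, u0, t1, u1 = 0.0, 1.0, 2.0, -1.0
ann = torch.nn.Sequential(torch.nn.Linear(1, 8), torch.nn.Tanh(), torch.nn.Linear(8, 1))

def reparam(t):
    tt = (t - t0) / (t1 - t0)
    return (1 - tt) * u0 + tt * u1 + (1 - torch.exp((1 - tt) * tt)) * ann(t)

u = reparam(torch.tensor([[t0], [t1]]))  # rows: u(t0), u(t1)
```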

set_impose_on(ith_unit)

[DEPRECATED] When training several functions with a single, multi-output network, this method is called (by a Solver class or a solve function) to keep track of which output is being parameterized.

Parameters: ith_unit (int) – The index of network output to be parameterized.

Note

This method is deprecated and retained for backward compatibility only. Users interested in enforcing conditions on multi-output networks should consider using a neurodiffeq.conditions.EnsembleCondition.

class neurodiffeq.conditions.DirichletBVP2D(x_min, x_min_val, x_max, x_max_val, y_min, y_min_val, y_max, y_max_val)

Bases: neurodiffeq.conditions.BaseCondition

A Dirichlet boundary condition on the boundary of \([x_0, x_1] \times [y_0, y_1]\), where

  • \(u(x_0, y) = f_0(y)\);
  • \(u(x_1, y) = f_1(y)\);
  • \(u(x, y_0) = g_0(x)\);
  • \(u(x, y_1) = g_1(x)\).
Parameters:
  • x_min (float) – The lower bound of x, the \(x_0\).
  • x_min_val (callable) – The boundary value on \(x = x_0\), i.e. \(f_0(y)\).
  • x_max (float) – The upper bound of x, the \(x_1\).
  • x_max_val (callable) – The boundary value on \(x = x_1\), i.e. \(f_1(y)\).
  • y_min (float) – The lower bound of y, the \(y_0\).
  • y_min_val (callable) – The boundary value on \(y = y_0\), i.e. \(g_0(x)\).
  • y_max (float) – The upper bound of y, the \(y_1\).
  • y_max_val (callable) – The boundary value on \(y = y_1\), i.e. \(g_1(x)\).
enforce(net, *coordinates)

Enforces this condition on a network.

Parameters:
  • net (torch.nn.Module) – The network whose output is to be re-parameterized.
  • coordinates (torch.Tensor) – Inputs of the neural network.
Returns:

The re-parameterized output, where the condition is automatically satisfied.

Return type:

torch.Tensor

parameterize(output_tensor, x, y)

Re-parameterizes outputs such that the Dirichlet condition is satisfied on all four sides of the domain.

The re-parameterization is \(\displaystyle u(x,y)=A(x,y) +\tilde{x}\big(1-\tilde{x}\big)\tilde{y}\big(1-\tilde{y}\big)\mathrm{ANN}(x,y)\), where

\(\displaystyle \begin{align*} A(x,y)=&\big(1-\tilde{x}\big)f_0(y)+\tilde{x}f_1(y) \\ &+\big(1-\tilde{y}\big)\Big(g_0(x)-\big[\big(1-\tilde{x}\big)g_0(x_0)+\tilde{x}g_0(x_1)\big]\Big) \\ &+\tilde{y}\Big(g_1(x)-\big[\big(1-\tilde{x}\big)g_1(x_0)+\tilde{x}g_1(x_1)\big]\Big) \end{align*}\)

\(\displaystyle\tilde{x}=\frac{x-x_0}{x_1-x_0}\),

\(\displaystyle\tilde{y}=\frac{y-y_0}{y_1-y_0}\),

and \(\mathrm{ANN}\) is the neural network.

Parameters:
  • output_tensor (torch.Tensor) – Output of the neural network.
  • x (torch.Tensor) – \(x\)-coordinates of inputs to the neural network; i.e., the sampled \(x\)-coordinates.
  • y (torch.Tensor) – \(y\)-coordinates of inputs to the neural network; i.e., the sampled \(y\)-coordinates.
Returns:

The re-parameterized output of the network.

Return type:

torch.Tensor
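
Spot check: the multiplier \(\tilde{x}(1-\tilde{x})\tilde{y}(1-\tilde{y})\) vanishes on the boundary, and \(A\) reduces to the prescribed boundary function there; e.g., on the \(x=x_0\) edge, \(A(x_0,y)=f_0(y)\). A plain-torch sketch with arbitrary boundary functions, grouping the subtracted corner terms as \((1-\tilde{x})g(x_0)+\tilde{x}g(x_1)\):

```python
import torch

# Check that u(x0, y) = f0(y) after re-parameterization, for any network.
torch.manual_seed(0)
x0, x1, y0, y1 = 0.0, 1.0, 0.0, 1.0
f0 = lambda y: torch.sin(y)      # boundary data (arbitrary choices)
f1 = lambda y: 1 + y
g0 = lambda x: x ** 2
g1 = lambda x: torch.cos(x)
ann = torch.nn.Sequential(torch.nn.Linear(2, 8), torch.nn.Tanh(), torch.nn.Linear(8, 1))

def reparam(x, y):
    xt, yt = (x - x0) / (x1 - x0), (y - y0) / (y1 - y0)
    cx0, cx1 = torch.tensor(x0), torch.tensor(x1)
    A = ((1 - xt) * f0(y) + xt * f1(y)
         + (1 - yt) * (g0(x) - ((1 - xt) * g0(cx0) + xt * g0(cx1)))
         + yt * (g1(x) - ((1 - xt) * g1(cx0) + xt * g1(cx1))))
    return A + xt * (1 - xt) * yt * (1 - yt) * ann(torch.cat([x, y], dim=1))

y = torch.rand(5, 1)
u_edge = reparam(torch.full_like(y, x0), y)  # equals f0(y)
```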

set_impose_on(ith_unit)

[DEPRECATED] When training several functions with a single, multi-output network, this method is called (by a Solver class or a solve function) to keep track of which output is being parameterized.

Parameters: ith_unit (int) – The index of network output to be parameterized.

Note

This method is deprecated and retained for backward compatibility only. Users interested in enforcing conditions on multi-output networks should consider using a neurodiffeq.conditions.EnsembleCondition.

class neurodiffeq.conditions.DirichletBVPSpherical(r_0, f, r_1=None, g=None)

Bases: neurodiffeq.conditions.BaseCondition

The Dirichlet boundary condition for the interior and exterior boundary of the sphere, where the interior boundary is not necessarily a point. The conditions are:

  • \(u(r_0,\theta,\phi)=f(\theta,\phi)\)
  • \(u(r_1,\theta,\phi)=g(\theta,\phi)\)
Parameters:
  • r_0 (float) – The radius of the interior boundary. When \(r_0 = 0\), the interior boundary collapses to a single point (center of the ball).
  • f (callable) – The value of \(u\) on the interior boundary. \(u(r_0, \theta, \phi)=f(\theta, \phi)\).
  • r_1 (float or None) – The radius of the exterior boundary. If set to None, g must also be None.
  • g (callable or None) – The value of \(u\) on the exterior boundary. \(u(r_1, \theta, \phi)=g(\theta, \phi)\). If set to None, r_1 must also be set to None.
enforce(net, *coordinates)

Enforces this condition on a network.

Parameters:
  • net (torch.nn.Module) – The network whose output is to be re-parameterized.
  • coordinates (torch.Tensor) – Inputs of the neural network.
Returns:

The re-parameterized output, where the condition is automatically satisfied.

Return type:

torch.Tensor

parameterize(output_tensor, r, theta, phi)

Re-parameterizes outputs such that the Dirichlet condition is satisfied on both spherical boundaries.

  • If both inner and outer boundaries are specified \(u(r_0,\theta,\phi)=f(\theta,\phi)\) and \(u(r_1,\theta,\phi)=g(\theta,\phi)\):

    The re-parameterization is \(\big(1-\tilde{r}\big)f(\theta,\phi)+\tilde{r}g(\theta,\phi) +\Big(1-e^{\tilde{r}(1-{\tilde{r}})}\Big)\mathrm{ANN}(r, \theta, \phi)\) where \(\displaystyle\tilde{r}=\frac{r-r_0}{r_1-r_0}\);

  • If only one boundary is specified (inner or outer), i.e., \(u(r_0,\theta,\phi)=f(\theta,\phi)\):

    The re-parameterization is \(f(\theta,\phi)+\Big(1-e^{-|r-r_0|}\Big)\mathrm{ANN}(r, \theta, \phi)\);

where \(\mathrm{ANN}\) is the neural network.

Parameters:
  • output_tensor (torch.Tensor) – Output of the neural network.
  • r (torch.Tensor) – The radii (or \(r\)-component) of the inputs to the network.
  • theta (torch.Tensor) – The co-latitudes (or \(\theta\)-component) of the inputs to the network.
  • phi (torch.Tensor) – The longitudes (or \(\phi\)-component) of the inputs to the network.
Returns:

The re-parameterized output of the network.

Return type:

torch.Tensor
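
The single-boundary branch can be checked directly: at \(r=r_0\) the factor \(1-e^{-|r-r_0|}\) vanishes, so the output equals \(f(\theta,\phi)\) for any network. A plain-torch sketch:

```python
import torch

# u(r, theta, phi) = f(theta, phi) + (1 - exp(-|r - r0|)) * ANN(r, theta, phi)
torch.manual_seed(0)
r0 = 1.0
f = lambda theta, phi: torch.sin(theta) * torch.cos(phi)  # arbitrary boundary data
ann = torch.nn.Sequential(torch.nn.Linear(3, 8), torch.nn.Tanh(), torch.nn.Linear(8, 1))

theta, phi = torch.rand(5, 1), torch.rand(5, 1)
r = torch.full((5, 1), r0)
u = f(theta, phi) + (1 - torch.exp(-torch.abs(r - r0))) * ann(torch.cat([r, theta, phi], dim=1))
```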

set_impose_on(ith_unit)

[DEPRECATED] When training several functions with a single, multi-output network, this method is called (by a Solver class or a solve function) to keep track of which output is being parameterized.

Parameters: ith_unit (int) – The index of network output to be parameterized.

Note

This method is deprecated and retained for backward compatibility only. Users interested in enforcing conditions on multi-output networks should consider using a neurodiffeq.conditions.EnsembleCondition.

class neurodiffeq.conditions.DirichletBVPSphericalBasis(r_0, R_0, r_1=None, R_1=None, max_degree=None)

Bases: neurodiffeq.conditions.BaseCondition

Similar to neurodiffeq.conditions.DirichletBVPSpherical. The only difference is that this condition is enforced on a neural net that only takes in \(r\) and returns the spherical harmonic coefficients \(\mathbf{R}(r)\). We constrain the coefficients \(R_k(r)\) in \(u(r,\theta,\phi)=\sum_{k}R_k(r)Y_k(\theta,\phi)\), where \(\big\{Y_k(\theta,\phi)\big\}_{k=1}^{K}\) can be any spherical function basis. A recommended choice is the real spherical harmonics \(Y_l^m(\theta,\phi)\), where \(l\) is the degree and \(m\) is the order.

The boundary conditions are: \(\mathbf{R}(r_0)=\mathbf{R}_0\) and \(\mathbf{R}(r_1)=\mathbf{R}_1\), where \(\mathbf{R}\) is a vector whose components are \(\big\{R_k\big\}_{k=1}^{K}\).

Parameters:
  • r_0 (float) – The radius of the interior boundary. When r_0 = 0, the interior boundary is collapsed to a single point (center of the ball).
  • R_0 (torch.Tensor) – The value of harmonic coefficients \(\mathbf{R}\) on the interior boundary. \(\mathbf{R}(r_0)=\mathbf{R}_0\).
  • r_1 (float or None) – The radius of the exterior boundary. If set to None, R_1 must also be None.
  • R_1 (torch.Tensor or None) – The value of harmonic coefficients \(\mathbf{R}\) on the exterior boundary. \(\mathbf{R}(r_1)=\mathbf{R}_1\). If set to None, r_1 must also be set to None.
  • max_degree (int) – [DEPRECATED] Highest degree when using spherical harmonics.
enforce(net, *coordinates)

Enforces this condition on a network.

Parameters:
  • net (torch.nn.Module) – The network whose output is to be re-parameterized.
  • coordinates (torch.Tensor) – Inputs of the neural network.
Returns:

The re-parameterized output, where the condition is automatically satisfied.

Return type:

torch.Tensor

parameterize(output_tensor, r)

Re-parameterizes outputs such that the Dirichlet condition is satisfied on both spherical boundaries.

  • If both inner and outer boundaries are specified, i.e., \(\mathbf{R}(r_0)=\mathbf{R}_0\) and \(\mathbf{R}(r_1)=\mathbf{R}_1\):

    The re-parameterization is \(\big(1-\tilde{r}\big)\mathbf{R}_0+\tilde{r}\mathbf{R}_1 +\Big(1-e^{\tilde{r}(1-{\tilde{r}})}\Big)\mathrm{ANN}(r)\) where \(\displaystyle\tilde{r}=\frac{r-r_0}{r_1-r_0}\);

  • If only one boundary is specified (inner or outer), i.e., \(\mathbf{R}(r_0)=\mathbf{R}_0\):

    The re-parameterization is \(\mathbf{R}_0+\Big(1-e^{-|r-r_0|}\Big)\mathrm{ANN}(r)\);

where \(\mathrm{ANN}\) is the neural network.

Parameters:
  • output_tensor (torch.Tensor) – Output of the neural network.
  • r (torch.Tensor) – The radii (or \(r\)-component) of the inputs to the network.
Returns:

The re-parameterized output of the network.

Return type:

torch.Tensor

set_impose_on(ith_unit)

[DEPRECATED] When training several functions with a single, multi-output network, this method is called (by a Solver class or a solve function) to keep track of which output is being parameterized.

Parameters: ith_unit (int) – The index of network output to be parameterized.

Note

This method is deprecated and retained for backward compatibility only. Users interested in enforcing conditions on multi-output networks should consider using a neurodiffeq.conditions.EnsembleCondition.

class neurodiffeq.conditions.DoubleEndedBVP1D(x_min, x_max, x_min_val=None, x_min_prime=None, x_max_val=None, x_max_prime=None)

Bases: neurodiffeq.conditions.BaseCondition

A boundary condition on a 1-D range where \(x\in[x_0, x_1]\). The conditions should have the following parts:

  • \(u(x_0)=u_0\) or \(u'_x(x_0)=u'_0\),
  • \(u(x_1)=u_1\) or \(u'_x(x_1)=u'_1\),

where \(\displaystyle u'_x=\frac{\partial u}{\partial x}\).

Parameters:
  • x_min (float) – The lower bound of x, the \(x_0\).
  • x_max (float) – The upper bound of x, the \(x_1\).
  • x_min_val (callable, optional) – The Dirichlet boundary condition when \(x = x_0\), the \(u(x_0)\), defaults to None.
  • x_min_prime (callable, optional) – The Neumann boundary condition when \(x = x_0\), the \(u'_x(x_0)\), defaults to None.
  • x_max_val (callable, optional) – The Dirichlet boundary condition when \(x = x_1\), the \(u(x_1)\), defaults to None.
  • x_max_prime (callable, optional) – The Neumann boundary condition when \(x = x_1\), the \(u'_x(x_1)\), defaults to None.
Raises:

NotImplementedError – When unimplemented boundary conditions are configured.

Note

This condition cannot be passed to neurodiffeq.conditions.EnsembleCondition unless both boundaries use Dirichlet conditions (by specifying only x_min_val and x_max_val) and force is set to True in EnsembleCondition's constructor.

enforce(net, x)

Enforces this condition on a network with inputs x.

Parameters:
  • net (torch.nn.Module) – The network whose output is to be re-parameterized.
  • x (torch.Tensor) – The \(x\)-coordinates of the samples; i.e., the spatial coordinates.
Returns:

The re-parameterized output, where the condition is automatically satisfied.

Return type:

torch.Tensor

Note

This method overrides the default method of neurodiffeq.conditions.BaseCondition. In general, you should avoid overriding enforce when implementing custom boundary conditions.

parameterize(u, x, *additional_tensors)

Re-parameterizes outputs such that the boundary conditions are satisfied.

There are four boundary conditions that are currently implemented:

  • For Dirichlet-Dirichlet boundary condition \(u(x_0)=u_0\) and \(u(x_1)=u_1\):

    The re-parameterization is \(\displaystyle u(x)=A+\tilde{x}\big(1-\tilde{x}\big)\mathrm{ANN}(x)\), where \(\displaystyle A=\big(1-\tilde{x}\big)u_0+\tilde{x}u_1\).

  • For Dirichlet-Neumann boundary condition \(u(x_0)=u_0\) and \(u'_x(x_1)=u'_1\):

    The re-parameterization is \(\displaystyle u(x)=A(x)+\tilde{x}\Big(\mathrm{ANN}(x)-\mathrm{ANN}(x_1)+u_0-\big(x_1-x_0\big)\mathrm{ANN}'_x(x_1)\Big)\), where \(\displaystyle A(x)=\big(1-\tilde{x}\big)u_0+\frac{1}{2}\tilde{x}^2\big(x_1-x_0\big)u'_1\).

  • For Neumann-Dirichlet boundary condition \(u'_x(x_0)=u'_0\) and \(u(x_1)=u_1\):

    The re-parameterization is \(\displaystyle u(x)=A(x)+\big(1-\tilde{x}\big)\Big(\mathrm{ANN}(x)-\mathrm{ANN}(x_0)+u_1+\big(x_1-x_0\big)\mathrm{ANN}'_x(x_0)\Big)\), where \(\displaystyle A(x)=\tilde{x}u_1-\frac{1}{2}\big(1-\tilde{x}\big)^2\big(x_1-x_0\big)u'_0\).

  • For Neumann-Neumann boundary condition \(u'_x(x_0)=u'_0\) and \(u'_x(x_1)=u'_1\):

    The re-parameterization is \(\displaystyle u(x)=A(x)+\frac{1}{2}\tilde{x}^2\Big(\mathrm{ANN}(x)-\mathrm{ANN}(x_1)-\frac{1}{2}\mathrm{ANN}'_x(x_1)\big(x_1-x_0\big)\Big) +\frac{1}{2}\big(1-\tilde{x}\big)^2\Big(\mathrm{ANN}(x)-\mathrm{ANN}(x_0)+\frac{1}{2}\mathrm{ANN}'_x(x_0)\big(x_1-x_0\big)\Big)\), where \(\displaystyle A(x)=\frac{1}{2}\tilde{x}^2\big(x_1-x_0\big)u'_1 - \frac{1}{2}\big(1-\tilde{x}\big)^2\big(x_1-x_0\big)u'_0\).

Notations:

  • \(\displaystyle\tilde{x}=\frac{x-x_0}{x_1-x_0}\),
  • \(\displaystyle\mathrm{ANN}\) is the neural network,
  • and \(\displaystyle\mathrm{ANN}'_x=\frac{\partial \mathrm{ANN}}{\partial x}\).
Parameters:
  • u (torch.Tensor) – Output of the neural network.
  • x (torch.Tensor) – The \(x\)-coordinates of the samples; i.e., the spatial coordinates.
  • additional_tensors (torch.Tensor) – Additional tensors that will be passed by enforce.
Returns:

The re-parameterized output of the network.

Return type:

torch.Tensor
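
The Dirichlet-Neumann branch can be verified numerically: with the constant term inside the bracket taken as \(u_0\), both \(u(x_0)=u_0\) and \(u'_x(x_1)=u'_1\) hold for an arbitrary network. A plain-torch sketch using autograd for the derivative check:

```python
import torch

# Dirichlet-Neumann re-parameterization:
#   u(x) = A(x) + x~ * (ANN(x) - ANN(x1) + u0 - (x1 - x0) * ANN'(x1))
#   A(x) = (1 - x~) * u0 + 0.5 * x~^2 * (x1 - x0) * u1p
torch.manual_seed(0)
x0, x1, u0, u1p = 0.0, 2.0, 1.5, -0.7
ann = torch.nn.Sequential(torch.nn.Linear(1, 8), torch.nn.Tanh(), torch.nn.Linear(8, 1))

def reparam(x):
    xt = (x - x0) / (x1 - x0)
    xb = torch.full_like(x, x1).requires_grad_(True)
    dann_x1 = torch.autograd.grad(ann(xb).sum(), xb, create_graph=True)[0]
    A = (1 - xt) * u0 + 0.5 * xt ** 2 * (x1 - x0) * u1p
    return A + xt * (ann(x) - ann(torch.full_like(x, x1)) + u0 - (x1 - x0) * dann_x1)

u_left = reparam(torch.full((1, 1), x0))                             # = u0
x_right = torch.full((1, 1), x1, requires_grad=True)
du_right = torch.autograd.grad(reparam(x_right).sum(), x_right)[0]   # = u1p (up to float error)
```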

set_impose_on(ith_unit)

[DEPRECATED] When training several functions with a single, multi-output network, this method is called (by a Solver class or a solve function) to keep track of which output is being parameterized.

Parameters: ith_unit (int) – The index of network output to be parameterized.

Note

This method is deprecated and retained for backward compatibility only. Users interested in enforcing conditions on multi-output networks should consider using a neurodiffeq.conditions.EnsembleCondition.

class neurodiffeq.conditions.EnsembleCondition(*sub_conditions, force=False)

Bases: neurodiffeq.conditions.BaseCondition

An ensemble condition that enforces sub-conditions on individual output units of the networks.

Parameters:
  • sub_conditions (BaseCondition) – Condition(s) to be ensembled.
  • force (bool) – Whether or not to force ensembling even when .enforce is overridden in one of the sub-conditions.
enforce(net, *coordinates)

Enforces this condition on a network.

Parameters:
  • net (torch.nn.Module) – The network whose output is to be re-parameterized.
  • coordinates (torch.Tensor) – Inputs of the neural network.
Returns:

The re-parameterized output, where the condition is automatically satisfied.

Return type:

torch.Tensor

parameterize(output_tensor, *input_tensors)

Re-parameterizes each column in output_tensor individually, using its corresponding sub-condition. This is useful when solving differential equations with a single, multi-output network.

Parameters:
  • output_tensor (torch.Tensor) – Output of the neural network. Number of units (.shape[1]) must equal number of sub-conditions.
  • input_tensors (torch.Tensor) – Inputs to the neural network; i.e., sampled coordinates; i.e., independent variables.
Returns:

The column-wise re-parameterized network output, concatenated across columns so that it’s still one tensor.

Return type:

torch.Tensor

set_impose_on(ith_unit)

[DEPRECATED] When training several functions with a single, multi-output network, this method is called (by a Solver class or a solve function) to keep track of which output is being parameterized.

Parameters: ith_unit (int) – The index of network output to be parameterized.

Note

This method is deprecated and retained for backward compatibility only. Users interested in enforcing conditions on multi-output networks should consider using a neurodiffeq.conditions.EnsembleCondition.

class neurodiffeq.conditions.IBVP1D(x_min, x_max, t_min, t_min_val, x_min_val=None, x_min_prime=None, x_max_val=None, x_max_prime=None)

Bases: neurodiffeq.conditions.BaseCondition

An initial & boundary condition on a 1-D range where \(x\in[x_0, x_1]\) and time starts at \(t_0\). The conditions should have the following parts:

  • \(u(x,t_0)=u_0(x)\),
  • \(u(x_0,t)=g(t)\) or \(u'_x(x_0,t)=p(t)\),
  • \(u(x_1,t)=h(t)\) or \(u'_x(x_1,t)=q(t)\),

where \(\displaystyle u'_x=\frac{\partial u}{\partial x}\).

Parameters:
  • x_min (float) – The lower bound of x, the \(x_0\).
  • x_max (float) – The upper bound of x, the \(x_1\).
  • t_min (float) – The initial time, the \(t_0\).
  • t_min_val (callable) – The initial condition, the \(u_0(x)\).
  • x_min_val (callable, optional) – The Dirichlet boundary condition when \(x = x_0\), the \(u(x_0, t)\), defaults to None.
  • x_min_prime (callable, optional) – The Neumann boundary condition when \(x = x_0\), the \(u'_x(x_0, t)\), defaults to None.
  • x_max_val (callable, optional) – The Dirichlet boundary condition when \(x = x_1\), the \(u(x_1, t)\), defaults to None.
  • x_max_prime (callable, optional) – The Neumann boundary condition when \(x = x_1\), the \(u'_x(x_1, t)\), defaults to None.
Raises:

NotImplementedError – When unimplemented boundary conditions are configured.

Note

This condition cannot be passed to neurodiffeq.conditions.EnsembleCondition unless both boundaries use Dirichlet conditions (by specifying only x_min_val and x_max_val) and force is set to True in EnsembleCondition's constructor.

enforce(net, x, t)

Enforces this condition on a network with inputs x and t.

Parameters:
  • net (torch.nn.Module) – The network whose output is to be re-parameterized.
  • x (torch.Tensor) – The \(x\)-coordinates of the samples; i.e., the spatial coordinates.
  • t (torch.Tensor) – The \(t\)-coordinates of the samples; i.e., the temporal coordinates.
Returns:

The re-parameterized output, where the condition is automatically satisfied.

Return type:

torch.Tensor

Note

This method overrides the default method of neurodiffeq.conditions.BaseCondition. In general, you should avoid overriding enforce when implementing custom boundary conditions.

parameterize(u, x, t, *additional_tensors)

Re-parameterizes outputs such that the initial and boundary conditions are satisfied.

The initial condition is always \(u(x,t_0)=u_0(x)\). There are four boundary conditions that are currently implemented:

  • For Dirichlet-Dirichlet boundary condition \(u(x_0,t)=g(t)\) and \(u(x_1,t)=h(t)\):

    The re-parameterization is \(\displaystyle u(x,t)=A(x,t)+\tilde{x}\big(1-\tilde{x}\big)\Big(1-e^{-\tilde{t}}\Big)\mathrm{ANN}(x,t)\), where \(\displaystyle A(x,t)=u_0(x)+ \tilde{x}\big(h(t)-h(t_0)\big)+\big(1-\tilde{x}\big)\big(g(t)-g(t_0)\big)\).

  • For Dirichlet-Neumann boundary condition \(u(x_0,t)=g(t)\) and \(u'_x(x_1, t)=q(t)\):

    The re-parameterization is \(\displaystyle u(x,t)=A(x,t)+\tilde{x}\Big(1-e^{-\tilde{t}}\Big) \Big(\mathrm{ANN}(x,t)-\big(x_1-x_0\big)\mathrm{ANN}'_x(x_1,t)-\mathrm{ANN}(x_1,t)\Big)\), where \(\displaystyle A(x,t)=u_0(x)+\big(x-x_0\big)\big(q(t)-q(t_0)\big)+\big(g(t)-g(t_0)\big)\).

  • For Neumann-Dirichlet boundary condition \(u'_x(x_0,t)=p(t)\) and \(u(x_1, t)=h(t)\):

    The re-parameterization is \(\displaystyle u(x,t)=A(x,t)+\big(1-\tilde{x}\big)\Big(1-e^{-\tilde{t}}\Big) \Big(\mathrm{ANN}(x,t)-\big(x_1-x_0\big)\mathrm{ANN}'_x(x_0,t)-\mathrm{ANN}(x_0,t)\Big)\), where \(\displaystyle A(x,t)=u_0(x)+\big(x_1-x\big)\big(p(t)-p(t_0)\big)+\big(h(t)-h(t_0)\big)\).

  • For Neumann-Neumann boundary condition \(u'_x(x_0,t)=p(t)\) and \(u'_x(x_1, t)=q(t)\)

    The re-parameterization is \(\displaystyle u(x,t)=A(x,t)+\left(1-e^{-\tilde{t}}\right) \Big( \mathrm{ANN}(x,t)-\big(x-x_0\big)\mathrm{ANN}'_x(x_0,t) +\frac{1}{2}\tilde{x}^2\big(x_1-x_0\big) \big(\mathrm{ANN}'_x(x_0,t)-\mathrm{ANN}'_x(x_1,t)\big) \Big)\), where \(\displaystyle A(x,t)=u_0(x) -\frac{1}{2}\big(1-\tilde{x}\big)^2\big(x_1-x_0\big)\big(p(t)-p(t_0)\big) +\frac{1}{2}\tilde{x}^2\big(x_1-x_0\big)\big(q(t)-q(t_0)\big)\).

Notations:

  • \(\displaystyle\tilde{t}=\frac{t-t_0}{t_1-t_0}\),
  • \(\displaystyle\tilde{x}=\frac{x-x_0}{x_1-x_0}\),
  • \(\displaystyle\mathrm{ANN}\) is the neural network,
  • and \(\displaystyle\mathrm{ANN}'_x=\frac{\partial \mathrm{ANN}}{\partial x}\).
Parameters:
  • u (torch.Tensor) – Output of the neural network.
  • x (torch.Tensor) – The \(x\)-coordinates of the samples; i.e., the spatial coordinates.
  • t (torch.Tensor) – The \(t\)-coordinates of the samples; i.e., the temporal coordinates.
  • additional_tensors (torch.Tensor) – Additional tensors that will be passed by enforce.
Returns:

The re-parameterized output of the network.

Return type:

torch.Tensor
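
A plain-torch check of the Dirichlet-Dirichlet branch with compatible data (\(u_0(x_0)=g(t_0)\) and \(u_0(x_1)=h(t_0)\)); here \(1-e^{-(t-t_0)}\) stands in for the time-damping factor \(1-e^{-\tilde{t}}\), since any factor vanishing at \(t_0\) suffices for this boundary check:

```python
import torch

# Dirichlet-Dirichlet: A(x, t) = u0(x) + x~ (h(t) - h(t0)) + (1 - x~)(g(t) - g(t0)),
# u(x, t) = A(x, t) + x~ (1 - x~) (1 - exp(-(t - t0))) * ANN(x, t).
torch.manual_seed(0)
x0, x1, t0 = 0.0, 1.0, 0.0
u0 = lambda x: x * (1 - x)        # initial profile (arbitrary, compatible)
g = lambda t: t                   # left boundary, g(t0) = u0(x0)
h = lambda t: t ** 2              # right boundary, h(t0) = u0(x1)
ann = torch.nn.Sequential(torch.nn.Linear(2, 8), torch.nn.Tanh(), torch.nn.Linear(8, 1))

def reparam(x, t):
    xt = (x - x0) / (x1 - x0)
    ct0 = torch.tensor(t0)
    A = u0(x) + xt * (h(t) - h(ct0)) + (1 - xt) * (g(t) - g(ct0))
    return A + xt * (1 - xt) * (1 - torch.exp(-(t - t0))) * ann(torch.cat([x, t], dim=1))

x, t = torch.rand(5, 1), torch.rand(5, 1)
u_init = reparam(x, torch.full_like(t, t0))    # = u0(x)
u_left = reparam(torch.full_like(x, x0), t)    # = g(t)
u_right = reparam(torch.full_like(x, x1), t)   # = h(t)
```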

set_impose_on(ith_unit)

[DEPRECATED] When training several functions with a single, multi-output network, this method is called (by a Solver class or a solve function) to keep track of which output is being parameterized.

Parameters: ith_unit (int) – The index of network output to be parameterized.

Note

This method is deprecated and retained for backward compatibility only. Users interested in enforcing conditions on multi-output networks should consider using a neurodiffeq.conditions.EnsembleCondition.

class neurodiffeq.conditions.IVP(t_0, u_0=None, u_0_prime=None)

Bases: neurodiffeq.conditions.BaseCondition

An initial value problem of one of the following forms:

  • Dirichlet condition: \(u(t_0)=u_0\).
  • Neumann condition: \(\displaystyle\frac{\partial u}{\partial t}\bigg|_{t = t_0} = u_0'\).
Parameters:
  • t_0 (float) – The initial time.
  • u_0 (float) – The initial value of \(u\). \(u(t_0)=u_0\).
  • u_0_prime (float, optional) – The initial derivative of \(u\) w.r.t. \(t\). \(\displaystyle\frac{\partial u}{\partial t}\bigg|_{t = t_0} = u_0'\). Defaults to None.
enforce(net, *coordinates)

Enforces this condition on a network.

Parameters:
  • net (torch.nn.Module) – The network whose output is to be re-parameterized.
  • coordinates (torch.Tensor) – Inputs of the neural network.
Returns:

The re-parameterized output, where the condition is automatically satisfied.

Return type:

torch.Tensor

parameterize(output_tensor, t)

Re-parameterizes outputs such that the Dirichlet/Neumann condition is satisfied.

  • For Dirichlet condition, the re-parameterization is \(\displaystyle u(t) = u_0 + \left(1 - e^{-(t-t_0)}\right) \mathrm{ANN}(t)\) where \(\mathrm{ANN}\) is the neural network.
  • For Neumann condition, the re-parameterization is \(\displaystyle u(t) = u_0 + (t-t_0) u'_0 + \left(1 - e^{-(t-t_0)}\right)^2 \mathrm{ANN}(t)\) where \(\mathrm{ANN}\) is the neural network.
Parameters:
  • output_tensor (torch.Tensor) – Output of the neural network.
  • t (torch.Tensor) – Input to the neural network; i.e., sampled time-points; i.e., independent variables.
Returns:

The re-parameterized output of the network.

Return type:

torch.Tensor
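
Both branches can be checked with plain torch; for the Neumann case, the factor \((1-e^{-(t-t_0)})^2\) and its first derivative both vanish at \(t_0\), so the value and slope there come entirely from \(u_0\) and \(u_0'\):

```python
import torch

# Neumann IVP re-parameterization:
#   u(t) = u0 + (t - t0) * u0p + (1 - exp(-(t - t0)))**2 * ANN(t)
torch.manual_seed(0)
t0, u0, u0p = 0.5, 2.0, -1.0
ann = torch.nn.Sequential(torch.nn.Linear(1, 8), torch.nn.Tanh(), torch.nn.Linear(8, 1))

t = torch.full((1, 1), t0, requires_grad=True)
u = u0 + (t - t0) * u0p + (1 - torch.exp(-(t - t0))) ** 2 * ann(t)
du = torch.autograd.grad(u.sum(), t)[0]
# u(t0) = u0 and u'(t0) = u0p, regardless of the network
```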

set_impose_on(ith_unit)

[DEPRECATED] When training several functions with a single, multi-output network, this method is called (by a Solver class or a solve function) to keep track of which output is being parameterized.

Parameters: ith_unit (int) – The index of network output to be parameterized.

Note

This method is deprecated and retained for backward compatibility only. Users interested in enforcing conditions on multi-output networks should consider using a neurodiffeq.conditions.EnsembleCondition.

class neurodiffeq.conditions.InfDirichletBVPSpherical(r_0, f, g, order=1)

Bases: neurodiffeq.conditions.BaseCondition

Similar to neurodiffeq.conditions.DirichletBVPSpherical, but with \(r_1\to+\infty\). Specifically,

  • \(\displaystyle u(r_0,\theta,\phi)=f(\theta,\phi)\),
  • \(\lim_{r\to+\infty}u(r,\theta,\phi)=g(\theta,\phi)\).
Parameters:
  • r_0 (float) – The radius of the interior boundary. When \(r_0=0\), the interior boundary collapses to a single point (center of the ball).
  • f (callable) – The value of \(u\) on the interior boundary. \(u(r_0,\theta,\phi)=f(\theta,\phi)\).
  • g (callable) – The value of \(u\) at infinity. \(\lim_{r\to+\infty}u(r,\theta,\phi)=g(\theta,\phi)\).
  • order (int or float) – The smallest \(k\) such that \(\lim_{r\to+\infty}u(r,\theta,\phi)e^{-kr}=0\). Defaults to 1.
enforce(net, *coordinates)

Enforces this condition on a network.

Parameters:
  • net (torch.nn.Module) – The network whose output is to be re-parameterized.
  • coordinates (torch.Tensor) – Inputs of the neural network.
Returns:

The re-parameterized output, where the condition is automatically satisfied.

Return type:

torch.Tensor

parameterize(output_tensor, r, theta, phi)

Re-parameterizes outputs such that the Dirichlet condition is satisfied both at \(r_0\) and infinity. The re-parameterization is

\(\begin{align} u(r,\theta,\phi)= &e^{-k(r-r_0)}f(\theta,\phi)\\ &+\tanh{\big(r-r_0\big)}g(\theta,\phi)\\ &+e^{-k(r-r_0)}\tanh{\big(r-r_0\big)}\mathrm{ANN}(r,\theta,\phi) \end{align}\),

where \(\mathrm{ANN}\) is the neural network.

Parameters:
  • output_tensor (torch.Tensor) – Output of the neural network.
  • r (torch.Tensor) – The radii (or \(r\)-component) of the inputs to the network.
  • theta (torch.Tensor) – The co-latitudes (or \(\theta\)-component) of the inputs to the network.
  • phi (torch.Tensor) – The longitudes (or \(\phi\)-component) of the inputs to the network.
Returns:

The re-parameterized output of the network.

Return type:

torch.Tensor
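Fixing \(\theta\) and \(\phi\) (so that \(f\) and \(g\) reduce to constants), the behavior of this reparameterization at both boundaries can be checked with a dependency-free sketch, where ann stands in for the network:

```python
import math

def inf_dirichlet_reparam(r, r_0, f, g, ann, k=1.0):
    # u(r) = exp(-k(r-r_0)) f + tanh(r-r_0) g + exp(-k(r-r_0)) tanh(r-r_0) ANN(r)
    s = r - r_0
    return (math.exp(-k * s) * f
            + math.tanh(s) * g
            + math.exp(-k * s) * math.tanh(s) * ann(r))

ann = lambda r: 10 * math.cos(r)  # arbitrary stand-in for the network
f, g, r_0 = 3.0, -2.0, 1.0

print(inf_dirichlet_reparam(r_0, r_0, f, g, ann))   # exactly f = 3.0
print(inf_dirichlet_reparam(50.0, r_0, f, g, ann))  # approaches g = -2.0
```

At \(r = r_0\) the tanh factors vanish and the exponential equals 1, leaving \(f\); as \(r\to+\infty\) the exponentials vanish and tanh tends to 1, leaving \(g\).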

set_impose_on(ith_unit)

[DEPRECATED] When training several functions with a single, multi-output network, this method is called (by a Solver class or a solve function) to keep track of which output is being parameterized.

Parameters:ith_unit (int) – The index of network output to be parameterized.

Note

This method is deprecated and retained for backward compatibility only. Users interested in enforcing conditions on multi-output networks should consider using a neurodiffeq.conditions.EnsembleCondition.

class neurodiffeq.conditions.InfDirichletBVPSphericalBasis(r_0, R_0, R_inf, order=1, max_degree=None)

Bases: neurodiffeq.conditions.BaseCondition

Similar to neurodiffeq.conditions.InfDirichletBVPSpherical. The only difference is that this condition is enforced on a neural net that takes in only \(r\) and outputs the spherical harmonic coefficients \(R(r)\). We constrain the coefficients \(R_k(r)\) in \(u(r,\theta,\phi)=\sum_{k}R_k(r)Y_k(\theta,\phi)\), where \(\big\{Y_k(\theta,\phi)\big\}_{k=1}^{K}\) can be any spherical function basis. A recommended choice is the real spherical harmonics \(Y_l^m(\theta,\phi)\), where \(l\) is the degree and \(m\) is the order of the spherical harmonics.

The boundary conditions are: \(\mathbf{R}(r_0)=\mathbf{R}_0\) and \(\lim_{r\to+\infty}\mathbf{R}(r)=\mathbf{R}_\infty\), where \(\mathbf{R}\) is a vector whose components are \(\big\{R_k\big\}_{k=1}^{K}\).

Parameters:
  • r_0 (float) – The radius of the interior boundary. When r_0 = 0, the interior boundary is collapsed to a single point (center of the ball).
  • R_0 (torch.Tensor) – The value of harmonic coefficients \(R\) on the interior boundary. \(R(r_0)=R_0\).
  • R_inf (torch.Tensor) – The value of harmonic coefficients \(R\) at infinity. \(\lim_{r\to+\infty}R(r)=R_\infty\).
  • order (int or float) – The smallest \(k\) that guarantees \(\lim_{r \to +\infty} R(r) e^{-k r} = \bf 0\). Defaults to 1.
  • max_degree (int) – [DEPRECATED] Highest degree when using spherical harmonics.
enforce(net, *coordinates)

Enforces this condition on a network.

Parameters:
  • net (torch.nn.Module) – The network whose output is to be re-parameterized.
  • coordinates (torch.Tensor) – Inputs of the neural network.
Returns:

The re-parameterized output, where the condition is automatically satisfied.

Return type:

torch.Tensor

parameterize(output_tensor, r)

Re-parameterizes outputs such that the Dirichlet condition is satisfied at both \(r_0\) and infinity.

The re-parameterization is

\(\begin{align} \mathbf{R}(r)= &e^{-k(r-r_0)}\mathbf{R}_0\\ &+\tanh{\big(r-r_0\big)}\mathbf{R}_\infty\\ &+e^{-k(r-r_0)}\tanh{\big(r-r_0\big)}\mathrm{ANN}(r) \end{align}\),

where \(\mathrm{ANN}\) is the neural network.

Parameters:
  • output_tensor (torch.Tensor) – Output of the neural network.
  • r (torch.Tensor) – The radii (or \(r\)-component) of the inputs to the network.
Returns:

The re-parameterized output of the network.

Return type:

torch.Tensor
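The same reparameterization applies component-wise to the coefficient vector. A dependency-free sketch (Python lists instead of tensors; ann is a stand-in network with one output per coefficient) checks both limits:

```python
import math

def inf_dirichlet_basis_reparam(r, r_0, R_0, R_inf, ann, k=1.0):
    # Applies the scalar reparameterization to each coefficient R_k(r)
    s = r - r_0
    decay, gate = math.exp(-k * s), math.tanh(s)
    return [decay * a + gate * b + decay * gate * c
            for a, b, c in zip(R_0, R_inf, ann(r))]

ann = lambda r: [math.sin(r), math.cos(r), r % 1.0]  # stand-in network, 3 outputs
R_0, R_inf = [1.0, 2.0, 3.0], [0.0, -1.0, 0.5]

print(inf_dirichlet_basis_reparam(0.0, 0.0, R_0, R_inf, ann))   # == R_0
print(inf_dirichlet_basis_reparam(40.0, 0.0, R_0, R_inf, ann))  # ~= R_inf
```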

set_impose_on(ith_unit)

[DEPRECATED] When training several functions with a single, multi-output network, this method is called (by a Solver class or a solve function) to keep track of which output is being parameterized.

Parameters:ith_unit (int) – The index of network output to be parameterized.

Note

This method is deprecated and retained for backward compatibility only. Users interested in enforcing conditions on multi-output networks should consider using a neurodiffeq.conditions.EnsembleCondition.

class neurodiffeq.conditions.IrregularBoundaryCondition

Bases: neurodiffeq.conditions.BaseCondition

enforce(net, *coordinates)

Enforces this condition on a network.

Parameters:
  • net (torch.nn.Module) – The network whose output is to be re-parameterized.
  • coordinates (torch.Tensor) – Inputs of the neural network.
Returns:

The re-parameterized output, where the condition is automatically satisfied.

Return type:

torch.Tensor

in_domain(*coordinates)

Given the coordinates (numpy.ndarray), this method returns a boolean array indicating whether each point lies within the domain.

Parameters:coordinates (numpy.ndarray) – Input tensors, each with shape (n_samples, 1).
Returns:Whether each point lies within the domain.
Return type:numpy.ndarray

Note

  • This method is meant to be used by monitors for irregular domain visualization.
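A minimal sketch of a subclass implementing in_domain for a hypothetical unit-disk domain. Plain Python lists are used here for illustration; the real method receives numpy arrays of shape (n_samples, 1) and should return a boolean array:

```python
# Hypothetical condition whose domain is the open unit disk x**2 + y**2 < 1.
# UnitDiskCondition stands in for a subclass of IrregularBoundaryCondition.
class UnitDiskCondition:
    def in_domain(self, x, y):
        # One boolean per sample point
        return [xi ** 2 + yi ** 2 < 1.0 for xi, yi in zip(x, y)]

cond = UnitDiskCondition()
print(cond.in_domain([0.0, 0.9, 2.0], [0.0, 0.9, 0.0]))  # [True, False, False]
```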
parameterize(output_tensor, *input_tensors)

Re-parameterizes output(s) of a network.

Parameters:
  • output_tensor (torch.Tensor) – Output of the neural network.
  • input_tensors (torch.Tensor) – Inputs to the neural network; i.e., sampled coordinates; i.e., independent variables.
Returns:

The re-parameterized output of the network.

Return type:

torch.Tensor

Note

This method is abstract for BaseCondition.

set_impose_on(ith_unit)

[DEPRECATED] When training several functions with a single, multi-output network, this method is called (by a Solver class or a solve function) to keep track of which output is being parameterized.

Parameters:ith_unit (int) – The index of network output to be parameterized.

Note

This method is deprecated and retained for backward compatibility only. Users interested in enforcing conditions on multi-output networks should consider using a neurodiffeq.conditions.EnsembleCondition.

class neurodiffeq.conditions.NoCondition

Bases: neurodiffeq.conditions.BaseCondition

A polymorphic condition where no re-parameterization will be performed.

Note

This condition is called polymorphic because it can be enforced on networks of arbitrary input/output sizes.

enforce(net, *coordinates)

Enforces this condition on a network.

Parameters:
  • net (torch.nn.Module) – The network whose output is to be re-parameterized.
  • coordinates (torch.Tensor) – Inputs of the neural network.
Returns:

The re-parameterized output, where the condition is automatically satisfied.

Return type:

torch.Tensor

parameterize(output_tensor, *input_tensors)

Performs no re-parameterization, or identity parameterization, in this case.

Parameters:
  • output_tensor (torch.Tensor) – Output of the neural network.
  • input_tensors (torch.Tensor) – Inputs to the neural network; i.e., sampled coordinates; i.e., independent variables.
Returns:

The re-parameterized output of the network.

Return type:

torch.Tensor

set_impose_on(ith_unit)

[DEPRECATED] When training several functions with a single, multi-output network, this method is called (by a Solver class or a solve function) to keep track of which output is being parameterized.

Parameters:ith_unit (int) – The index of network output to be parameterized.

Note

This method is deprecated and retained for backward compatibility only. Users interested in enforcing conditions on multi-output networks should consider using a neurodiffeq.conditions.EnsembleCondition.

neurodiffeq.solvers

class neurodiffeq.solvers.BaseSolution(nets, conditions)

Bases: abc.ABC

A solution to a PDE/ODE (system).

Parameters:
  • nets (list[torch.nn.Module] or torch.nn.Module) –

    The neural networks that approximate the PDE/ODE solution.

    • If nets is a list of torch.nn.Module, its length should equal that of conditions
    • If nets is a single torch.nn.Module, it should have as many output units as the length of conditions
  • conditions (list[neurodiffeq.conditions.BaseCondition]) – A list of conditions that should be enforced on the PDE/ODE solution. conditions should have a length equal to the number of dependent variables in the ODE/PDE system.
class neurodiffeq.solvers.BaseSolver(diff_eqs, conditions, nets=None, train_generator=None, valid_generator=None, analytic_solutions=None, optimizer=None, loss_fn=None, n_batches_train=1, n_batches_valid=4, metrics=None, n_input_units=None, n_output_units=None, shuffle=None, batch_size=None)

Bases: abc.ABC, neurodiffeq.solvers_utils.PretrainedSolver

A class for solving ODE/PDE systems.

Parameters:
  • diff_eqs (callable) – The differential equation system to solve, which maps a tuple of coordinates to a tuple of ODE/PDE residuals. Both the coordinates and ODE/PDE residuals must have shape (-1, 1).
  • conditions (list[neurodiffeq.conditions.BaseCondition]) – List of boundary conditions for each target function.
  • nets (list[torch.nn.Module], optional) – List of neural networks for parameterized solution. If provided, length must equal that of conditions.
  • train_generator (neurodiffeq.generators.BaseGenerator, required) – A generator for sampling training points. It must provide a .get_examples() method and a .size field.
  • valid_generator (neurodiffeq.generators.BaseGenerator, required) – A generator for sampling validation points. It must provide a .get_examples() method and a .size field.
  • analytic_solutions (callable, optional) – [DEPRECATED] Pass metrics instead. The analytical solutions to be compared with neural net solutions. It maps a tuple of coordinates to a tuple of function values. The output shape should match that of networks.
  • optimizer (torch.optim.Optimizer, optional) – The optimizer to be used for training.
  • loss_fn (str or torch.nn.modules.loss._Loss or callable) –

    The loss function used for training.

    • If a str, must be present in the keys of neurodiffeq.losses._losses.
    • If a torch.nn.modules.loss._Loss instance, just pass the instance.
    • If any other callable, it must map A) a residual tensor (shape (n_points, n_equations)), B) a tuple of function values (length n_funcs, each element a tensor of shape (n_points, 1)), and C) a tuple of coordinate values (length n_coords, each element a tensor of shape (n_points, 1)) to a tensor of empty shape (i.e. a scalar). The returned tensor must be connected to the computational graph, so that backpropagation can be performed.
  • n_batches_train (int, optional) – Number of batches to train in every epoch, where batch-size equals train_generator.size. Defaults to 1.
  • n_batches_valid (int, optional) – Number of batches to validate in every epoch, where batch-size equals valid_generator.size. Defaults to 4.
  • metrics (dict, optional) –

    Additional metrics to be logged (besides loss). metrics should be a dict where

    • Keys are metric names (e.g. ‘analytic_mse’);
    • Values are functions (callables) that compute the metric value. These functions must accept the same input as the differential equation diff_eqs.
  • n_input_units (int, required) – Number of input units for each neural network. Ignored if nets is specified.
  • n_output_units (int, required) – Number of output units for each neural network. Ignored if nets is specified.
  • batch_size (int) – [DEPRECATED and IGNORED] Each batch will use all samples generated. Please specify n_batches_train and n_batches_valid instead.
  • shuffle (bool) – [DEPRECATED and IGNORED] Shuffling should be performed by generators.
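To illustrate the calling convention of a custom loss_fn, the sketch below computes a mean-squared residual loss. Plain Python lists stand in for tensors here; in practice all three arguments are torch tensors and the returned scalar must remain attached to the computational graph:

```python
def my_loss(residuals, funcs, coords):
    # residuals: (n_points, n_equations); funcs/coords: tuples of (n_points, 1) columns.
    # Returns the mean of squared residuals over all points and equations.
    n = sum(len(row) for row in residuals)
    return sum(v * v for row in residuals for v in row) / n

residuals = [[0.5, -0.5], [1.0, 0.0]]  # 2 points, 2 equations
funcs = ([0.1], [0.2])                 # unused by this particular loss
coords = ([0.0], [1.0])
print(my_loss(residuals, funcs, coords))  # (0.25 + 0.25 + 1.0 + 0.0) / 4 = 0.375
```

Because funcs and coords are also passed in, a custom loss can penalize function values or weight points by location, not just reduce the residuals.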
additional_loss(residual, funcs, coords)

Additional loss terms for training. This method is to be overridden by subclasses. This method can use any of the internal variables: self.nets, self.conditions, self.global_epoch, etc.

Parameters:
  • residual (torch.Tensor) – Residual tensor of differential equation. It has shape (N_SAMPLES, N_EQUATIONS)
  • funcs (List[torch.Tensor]) – Outputs of the networks after parameterization. There are len(nets) entries in total. Each entry is a tensor of shape (N_SAMPLES, N_OUTPUT_UNITS).
  • coords (List[torch.Tensor]) – Inputs to the networks; a.k.a. the spatio-temporal coordinates of the system. There are N_COORDS entries in total. Each entry is a tensor of shape (N_SAMPLES, 1).
Returns:

Additional loss. Must be a torch.Tensor of empty shape (scalar).

Return type:

torch.Tensor

compute_func_val(net, cond, *coordinates)

Compute the function value evaluated on the points specified by coordinates.

Parameters:
  • net (torch.nn.Module) – The network to be parameterized and evaluated.
  • cond (neurodiffeq.conditions.BaseCondition) – The condition (a.k.a. parameterization) for the network.
  • coordinates (tuple[torch.Tensor]) – A tuple of coordinate components, each with shape = (-1, 1).
Returns:

Function values at the sampled points.

Return type:

torch.Tensor

fit(max_epochs, callbacks=(), tqdm_file=<_io.TextIOWrapper name='<stderr>' mode='w' encoding='utf-8'>, **kwargs)

Run multiple epochs of training and validation, update best loss at the end of each epoch.

If callbacks is passed, callbacks are run, one at a time, after training, validating and updating best model.

Parameters:
  • max_epochs (int) – Number of epochs to run.
  • callbacks (list[callable]) – A list of callback functions. Each function should accept the solver instance itself as its only argument.
  • tqdm_file (io.StringIO or _io.TextIOWrapper) – File to write tqdm progress bar. If set to None, tqdm is not used at all. Defaults to sys.stderr.

Note

  1. This method does not return the solution; that is done by the .get_solution() method.
  2. A callback cb(solver) can set solver._stop_training to True to perform early stopping.
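A sketch of an early-stopping callback is shown below. DummySolver is hypothetical and carries only the attributes this callback touches; the validation-loss history attribute (here called metrics_history) is an assumption made for this sketch. A real callback receives the solver instance itself:

```python
# Hypothetical stand-in for a solver: only the attributes the callback
# reads (metrics_history) and writes (_stop_training) are modeled.
class DummySolver:
    def __init__(self):
        self.metrics_history = {'valid_loss': [0.9, 0.5, 0.51, 0.52]}
        self._stop_training = False

def stop_if_not_improving(solver, patience=2):
    # Stop when the last `patience` validation losses never beat the best loss
    losses = solver.metrics_history['valid_loss']
    if len(losses) > patience and min(losses[-patience:]) >= min(losses):
        solver._stop_training = True

solver = DummySolver()
stop_if_not_improving(solver)
print(solver._stop_training)  # True: 0.51 and 0.52 never improve on 0.5
```

Passing such a function in the callbacks list of fit() would run it once per epoch.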
get_internals(var_names=None, return_type='list')

Return internal variable(s) of the solver.

  • If var_names == ‘all’, return all internal variables as a dict.
  • If var_names is a single str, return the corresponding variable.
  • If var_names is a list and return_type == ‘list’, return corresponding internal variables as a list.
  • If var_names is a list and return_type == ‘dict’, return a dict with keys in var_names.
Parameters:
  • var_names (str or list[str]) – An internal variable name or a list of internal variable names.
  • return_type (str) – {‘list’, ‘dict’}; Ignored if var_names is a string.
Returns:

A single variable, or a list/dict of internal variables as indicated above.

Return type:

list or dict or any

get_residuals(*coords, to_numpy=False, best=True, no_reshape=False)

Get the residuals of the differential equation (or system of differential equations) evaluated at given points.

Parameters:
  • coords (tuple[torch.Tensor] or tuple[np.ndarray]) – The coordinate values where the residual(s) shall be evaluated. If numpy arrays are passed, the method implicitly creates torch tensors with corresponding values.
  • to_numpy (bool) – Whether to return numpy arrays. Defaults to False.
  • best (bool) – If set to False, the network from the most recent epoch will be used to evaluate the residuals. If set to True, the network from the epoch with the lowest validation loss will be used to evaluate the residuals. Defaults to True.
  • no_reshape (bool) – If set to True, no reshaping will be performed on output. Defaults to False.
Returns:

The residuals evaluated at given points. If there is only one equation in the differential equation system, a single torch tensor (or numpy array) will be returned. If there are multiple equations, a list of torch tensors (or numpy arrays) will be returned. The returned shape will be the same as the first input coordinate, unless no_reshape is set to True. Note that the return value will always be torch tensors (even if coords are numpy arrays) unless to_numpy is explicitly set to True.

Return type:

list[torch.Tensor or numpy.array] or torch.Tensor or numpy.array

get_solution(copy=True, best=True)

Get a (callable) solution object. See this usage example:

solution = solver.get_solution()
point_coords = train_generator.get_examples()
value_at_points = solution(point_coords)
Parameters:
  • copy (bool) – Whether to make a copy of the networks so that subsequent training doesn’t affect the solution; Defaults to True.
  • best (bool) – Whether to return the solution with lowest loss instead of the solution after the last epoch. Defaults to True.
Returns:

A solution object which can be called. To evaluate the solution on certain points, you should pass the coordinates vector(s) to the returned solution.

Return type:

BaseSolution

global_epoch

Global epoch count, always equal to the length of train loss history.

Returns:Number of training epochs that have been run.
Return type:int
run_train_epoch()

Run a training epoch, update history, and perform gradient descent.

run_valid_epoch()

Run a validation epoch and update history.

class neurodiffeq.solvers.BundleSolution1D(nets, conditions)

Bases: neurodiffeq.solvers.BaseSolution

class neurodiffeq.solvers.BundleSolver1D(ode_system, conditions, t_min, t_max, theta_min=None, theta_max=None, nets=None, train_generator=None, valid_generator=None, analytic_solutions=None, optimizer=None, loss_fn=None, n_batches_train=1, n_batches_valid=4, metrics=None, n_output_units=1, batch_size=None, shuffle=None)

Bases: neurodiffeq.solvers.BaseSolver

A solver class for solving ODEs (single-input differential equations), or a bundle of ODEs over different values of their parameters and/or conditions.

Parameters:
  • ode_system (callable) – The ODE system to solve, which maps a torch.Tensor or a tuple of torch.Tensors to a tuple of ODE residuals; both the inputs and outputs must have shape (n_samples, 1).
  • conditions (list[neurodiffeq.conditions.BaseCondition]) – List of conditions for each target function.
  • t_min (float, optional) – Lower bound of input (start time). Ignored if train_generator and valid_generator are both set.
  • t_max (float, optional) – Upper bound of input (end time). Ignored if train_generator and valid_generator are both set.
  • theta_min (float or tuple, optional) – Lower bound of input (parameters and/or conditions). If conditions are included in the bundle, the order should match the one inferred by the values of the bundle_conditions input in the neurodiffeq.conditions.BundleIVP. Defaults to None. Ignored if train_generator and valid_generator are both set.
  • theta_max (float or tuple, optional) – Upper bound of input (parameters and/or conditions). If conditions are included in the bundle, the order should match the one inferred by the values of the bundle_conditions input in the neurodiffeq.conditions.BundleIVP. Defaults to None. Ignored if train_generator and valid_generator are both set.
  • nets (list[torch.nn.Module], optional) – List of neural networks for parameterized solution. If provided, length of nets must equal that of conditions
  • train_generator (neurodiffeq.generators.BaseGenerator, optional) – Generator for sampling training points, which must provide a .get_examples() method and a .size field. train_generator must be specified if t_min and t_max are not set.
  • valid_generator (neurodiffeq.generators.BaseGenerator, optional) – Generator for sampling validation points, which must provide a .get_examples() method and a .size field. valid_generator must be specified if t_min and t_max are not set.
  • analytic_solutions (callable, optional) – Analytical solutions to be compared with neural net solutions. It maps a torch.Tensor to a tuple of function values. Output shape should match that of nets.
  • optimizer (torch.optim.Optimizer, optional) – Optimizer to be used for training. Defaults to a torch.optim.Adam instance that trains on all parameters of nets.
  • loss_fn (str or torch.nn.modules.loss._Loss or callable) –

    The loss function used for training.

    • If a str, must be present in the keys of neurodiffeq.losses._losses.
    • If a torch.nn.modules.loss._Loss instance, just pass the instance.
    • If any other callable, it must map A) a residual tensor (shape (n_points, n_equations)), B) a tuple of function values (length n_funcs, each element a tensor of shape (n_points, 1)), and C) a tuple of coordinate values (length n_coords, each element a tensor of shape (n_points, 1)) to a tensor of empty shape (i.e. a scalar). The returned tensor must be connected to the computational graph, so that backpropagation can be performed.
  • n_batches_train (int, optional) – Number of batches to train in every epoch, where batch-size equals train_generator.size. Defaults to 1.
  • n_batches_valid (int, optional) – Number of batches to validate in every epoch, where batch-size equals valid_generator.size. Defaults to 4.
  • metrics (dict[str, callable], optional) –

    Additional metrics to be logged (besides loss). metrics should be a dict where

    • Keys are metric names (e.g. ‘analytic_mse’);
    • Values are functions (callables) that compute the metric value. These functions must accept the same input as the differential equation ode_system.
  • n_output_units (int, optional) – Number of output units for each neural network. Ignored if nets is specified. Defaults to 1.
  • batch_size (int) – [DEPRECATED and IGNORED] Each batch will use all samples generated. Please specify n_batches_train and n_batches_valid instead.
  • shuffle (bool) – [DEPRECATED and IGNORED] Shuffling should be performed by generators.
additional_loss(residual, funcs, coords)

Additional loss terms for training. This method is to be overridden by subclasses. This method can use any of the internal variables: self.nets, self.conditions, self.global_epoch, etc.

Parameters:
  • residual (torch.Tensor) – Residual tensor of differential equation. It has shape (N_SAMPLES, N_EQUATIONS)
  • funcs (List[torch.Tensor]) – Outputs of the networks after parameterization. There are len(nets) entries in total. Each entry is a tensor of shape (N_SAMPLES, N_OUTPUT_UNITS).
  • coords (List[torch.Tensor]) – Inputs to the networks; a.k.a. the spatio-temporal coordinates of the system. There are N_COORDS entries in total. Each entry is a tensor of shape (N_SAMPLES, 1).
Returns:

Additional loss. Must be a torch.Tensor of empty shape (scalar).

Return type:

torch.Tensor

compute_func_val(net, cond, *coordinates)

Compute the function value evaluated on the points specified by coordinates.

Parameters:
  • net (torch.nn.Module) – The network to be parameterized and evaluated.
  • cond (neurodiffeq.conditions.BaseCondition) – The condition (a.k.a. parameterization) for the network.
  • coordinates (tuple[torch.Tensor]) – A tuple of coordinate components, each with shape = (-1, 1).
Returns:

Function values at the sampled points.

Return type:

torch.Tensor

fit(max_epochs, callbacks=(), tqdm_file=<_io.TextIOWrapper name='<stderr>' mode='w' encoding='utf-8'>, **kwargs)

Run multiple epochs of training and validation, update best loss at the end of each epoch.

If callbacks is passed, callbacks are run, one at a time, after training, validating and updating best model.

Parameters:
  • max_epochs (int) – Number of epochs to run.
  • callbacks (list[callable]) – A list of callback functions. Each function should accept the solver instance itself as its only argument.
  • tqdm_file (io.StringIO or _io.TextIOWrapper) – File to write tqdm progress bar. If set to None, tqdm is not used at all. Defaults to sys.stderr.

Note

  1. This method does not return the solution; that is done by the .get_solution() method.
  2. A callback cb(solver) can set solver._stop_training to True to perform early stopping.
get_internals(var_names=None, return_type='list')

Return internal variable(s) of the solver.

  • If var_names == ‘all’, return all internal variables as a dict.
  • If var_names is a single str, return the corresponding variable.
  • If var_names is a list and return_type == ‘list’, return corresponding internal variables as a list.
  • If var_names is a list and return_type == ‘dict’, return a dict with keys in var_names.
Parameters:
  • var_names (str or list[str]) – An internal variable name or a list of internal variable names.
  • return_type (str) – {‘list’, ‘dict’}; Ignored if var_names is a string.
Returns:

A single variable, or a list/dict of internal variables as indicated above.

Return type:

list or dict or any

get_residuals(*coords, to_numpy=False, best=True, no_reshape=False)

Get the residuals of the differential equation (or system of differential equations) evaluated at given points.

Parameters:
  • coords (tuple[torch.Tensor] or tuple[np.ndarray]) – The coordinate values where the residual(s) shall be evaluated. If numpy arrays are passed, the method implicitly creates torch tensors with corresponding values.
  • to_numpy (bool) – Whether to return numpy arrays. Defaults to False.
  • best (bool) – If set to False, the network from the most recent epoch will be used to evaluate the residuals. If set to True, the network from the epoch with the lowest validation loss will be used to evaluate the residuals. Defaults to True.
  • no_reshape (bool) – If set to True, no reshaping will be performed on output. Defaults to False.
Returns:

The residuals evaluated at given points. If there is only one equation in the differential equation system, a single torch tensor (or numpy array) will be returned. If there are multiple equations, a list of torch tensors (or numpy arrays) will be returned. The returned shape will be the same as the first input coordinate, unless no_reshape is set to True. Note that the return value will always be torch tensors (even if coords are numpy arrays) unless to_numpy is explicitly set to True.

Return type:

list[torch.Tensor or numpy.array] or torch.Tensor or numpy.array

get_solution(copy=True, best=True)

Get a (callable) solution object. See this usage example:

solution = solver.get_solution()
point_coords = train_generator.get_examples()
value_at_points = solution(point_coords)
Parameters:
  • copy (bool) – Whether to make a copy of the networks so that subsequent training doesn’t affect the solution; Defaults to True.
  • best (bool) – Whether to return the solution with lowest loss instead of the solution after the last epoch. Defaults to True.
Returns:

A solution object which can be called. To evaluate the solution on certain points, you should pass the coordinates vector(s) to the returned solution.

Return type:

BaseSolution

global_epoch

Global epoch count, always equal to the length of train loss history.

Returns:Number of training epochs that have been run.
Return type:int
run_train_epoch()

Run a training epoch, update history, and perform gradient descent.

run_valid_epoch()

Run a validation epoch and update history.

class neurodiffeq.solvers.GenericSolution(nets, conditions)

Bases: neurodiffeq.solvers.BaseSolution

class neurodiffeq.solvers.GenericSolver(diff_eqs, conditions, nets=None, train_generator=None, valid_generator=None, analytic_solutions=None, optimizer=None, loss_fn=None, n_batches_train=1, n_batches_valid=4, metrics=None, n_input_units=None, n_output_units=None, shuffle=None, batch_size=None)

Bases: neurodiffeq.solvers.BaseSolver

additional_loss(residual, funcs, coords)

Additional loss terms for training. This method is to be overridden by subclasses. This method can use any of the internal variables: self.nets, self.conditions, self.global_epoch, etc.

Parameters:
  • residual (torch.Tensor) – Residual tensor of differential equation. It has shape (N_SAMPLES, N_EQUATIONS)
  • funcs (List[torch.Tensor]) – Outputs of the networks after parameterization. There are len(nets) entries in total. Each entry is a tensor of shape (N_SAMPLES, N_OUTPUT_UNITS).
  • coords (List[torch.Tensor]) – Inputs to the networks; a.k.a. the spatio-temporal coordinates of the system. There are N_COORDS entries in total. Each entry is a tensor of shape (N_SAMPLES, 1).
Returns:

Additional loss. Must be a torch.Tensor of empty shape (scalar).

Return type:

torch.Tensor

compute_func_val(net, cond, *coordinates)

Compute the function value evaluated on the points specified by coordinates.

Parameters:
  • net (torch.nn.Module) – The network to be parameterized and evaluated.
  • cond (neurodiffeq.conditions.BaseCondition) – The condition (a.k.a. parameterization) for the network.
  • coordinates (tuple[torch.Tensor]) – A tuple of coordinate components, each with shape = (-1, 1).
Returns:

Function values at the sampled points.

Return type:

torch.Tensor

fit(max_epochs, callbacks=(), tqdm_file=<_io.TextIOWrapper name='<stderr>' mode='w' encoding='utf-8'>, **kwargs)

Run multiple epochs of training and validation, update best loss at the end of each epoch.

If callbacks is passed, callbacks are run, one at a time, after training, validating and updating best model.

Parameters:
  • max_epochs (int) – Number of epochs to run.
  • callbacks (list[callable]) – A list of callback functions. Each function should accept the solver instance itself as its only argument.
  • tqdm_file (io.StringIO or _io.TextIOWrapper) – File to write tqdm progress bar. If set to None, tqdm is not used at all. Defaults to sys.stderr.

Note

  1. This method does not return the solution; that is done by the .get_solution() method.
  2. A callback cb(solver) can set solver._stop_training to True to perform early stopping.
get_internals(var_names=None, return_type='list')

Return internal variable(s) of the solver.

  • If var_names == ‘all’, return all internal variables as a dict.
  • If var_names is a single str, return the corresponding variable.
  • If var_names is a list and return_type == ‘list’, return corresponding internal variables as a list.
  • If var_names is a list and return_type == ‘dict’, return a dict with keys in var_names.
Parameters:
  • var_names (str or list[str]) – An internal variable name or a list of internal variable names.
  • return_type (str) – {‘list’, ‘dict’}; Ignored if var_names is a string.
Returns:

A single variable, or a list/dict of internal variables as indicated above.

Return type:

list or dict or any

get_residuals(*coords, to_numpy=False, best=True, no_reshape=False)

Get the residuals of the differential equation (or system of differential equations) evaluated at given points.

Parameters:
  • coords (tuple[torch.Tensor] or tuple[np.ndarray]) – The coordinate values where the residual(s) shall be evaluated. If numpy arrays are passed, the method implicitly creates torch tensors with corresponding values.
  • to_numpy (bool) – Whether to return numpy arrays. Defaults to False.
  • best (bool) – If set to False, the network from the most recent epoch will be used to evaluate the residuals. If set to True, the network from the epoch with the lowest validation loss will be used to evaluate the residuals. Defaults to True.
  • no_reshape (bool) – If set to True, no reshaping will be performed on output. Defaults to False.
Returns:

The residuals evaluated at given points. If there is only one equation in the differential equation system, a single torch tensor (or numpy array) will be returned. If there are multiple equations, a list of torch tensors (or numpy arrays) will be returned. The returned shape will be the same as the first input coordinate, unless no_reshape is set to True. Note that the return value will always be torch tensors (even if coords are numpy arrays) unless to_numpy is explicitly set to True.

Return type:

list[torch.Tensor or numpy.array] or torch.Tensor or numpy.array

get_solution(copy=True, best=True)

Get a (callable) solution object. See this usage example:

solution = solver.get_solution()
point_coords = train_generator.get_examples()
value_at_points = solution(point_coords)
Parameters:
  • copy (bool) – Whether to make a copy of the networks so that subsequent training doesn’t affect the solution; Defaults to True.
  • best (bool) – Whether to return the solution with lowest loss instead of the solution after the last epoch. Defaults to True.
Returns:

A solution object which can be called. To evaluate the solution on certain points, you should pass the coordinates vector(s) to the returned solution.

Return type:

BaseSolution

global_epoch

Global epoch count, always equal to the length of train loss history.

Returns: Number of training epochs that have been run.
Return type: int
run_train_epoch()

Run a training epoch, update history, and perform gradient descent.

run_valid_epoch()

Run a validation epoch and update history.

class neurodiffeq.solvers.Solution1D(nets, conditions)

Bases: neurodiffeq.solvers.BaseSolution

class neurodiffeq.solvers.Solution2D(nets, conditions)

Bases: neurodiffeq.solvers.BaseSolution

class neurodiffeq.solvers.SolutionSpherical(nets, conditions)

Bases: neurodiffeq.solvers.BaseSolution

class neurodiffeq.solvers.SolutionSphericalHarmonics(nets, conditions, max_degree=None, harmonics_fn=None)

Bases: neurodiffeq.solvers.SolutionSpherical

A solution to a PDE (system) in spherical coordinates.

Parameters:
  • nets (list[torch.nn.Module]) – List of networks that takes in radius tensor and outputs the coefficients of spherical harmonics.
  • conditions (list[neurodiffeq.conditions.BaseCondition]) – List of conditions to be enforced on each net; must be of the same length as nets.
  • harmonics_fn (callable) – Mapping from \(\theta\) and \(\phi\) to basis functions, e.g., spherical harmonics.
  • max_degree (int) – DEPRECATED and SUPERSEDED by harmonics_fn. Highest degree used for the harmonic basis.
class neurodiffeq.solvers.Solver1D(ode_system, conditions, t_min=None, t_max=None, nets=None, train_generator=None, valid_generator=None, analytic_solutions=None, optimizer=None, loss_fn=None, n_batches_train=1, n_batches_valid=4, metrics=None, n_output_units=1, batch_size=None, shuffle=None)

Bases: neurodiffeq.solvers.BaseSolver

A solver class for solving ODEs (single-input differential equations)

Parameters:
  • ode_system (callable) – The ODE system to solve, which maps a torch.Tensor to a tuple of ODE residuals; both the input and the outputs must have shape (n_samples, 1).
  • conditions (list[neurodiffeq.conditions.BaseCondition]) – List of conditions for each target function.
  • t_min (float, optional) – Lower bound of input (start time). Ignored if train_generator and valid_generator are both set.
  • t_max (float, optional) – Upper bound of input (end time). Ignored if train_generator and valid_generator are both set.
  • nets (list[torch.nn.Module], optional) – List of neural networks for parameterized solution. If provided, length of nets must equal that of conditions
  • train_generator (neurodiffeq.generators.BaseGenerator, optional) – Generator for sampling training points, which must provide a .get_examples() method and a .size field. train_generator must be specified if t_min and t_max are not set.
  • valid_generator (neurodiffeq.generators.BaseGenerator, optional) – Generator for sampling validation points, which must provide a .get_examples() method and a .size field. valid_generator must be specified if t_min and t_max are not set.
  • analytic_solutions (callable, optional) – Analytical solutions to be compared with neural net solutions. It maps a torch.Tensor to a tuple of function values. Output shape should match that of nets.
  • optimizer (torch.optim.Optimizer, optional) – Optimizer to be used for training. Defaults to a torch.optim.Adam instance that trains on all parameters of nets.
  • loss_fn (str or torch.nn.modules.loss._Loss or callable) –

    The loss function used for training.

    • If a str, must be present in the keys of neurodiffeq.losses._losses.
    • If a torch.nn.modules.loss._Loss instance, just pass the instance.
    • If any other callable, it must map A) a residual tensor (shape (n_points, n_equations)), B) a function values tuple (length n_funcs, each element a tensor of shape (n_points, 1)), and C) a coordinate values tuple (length n_coords, each element a tensor of shape (n_points, 1)) to a tensor of empty shape (i.e. a scalar). The returned tensor must be connected to the computational graph, so that backpropagation can be performed.
  • n_batches_train (int, optional) – Number of batches to train in every epoch, where batch-size equals train_generator.size. Defaults to 1.
  • n_batches_valid (int, optional) – Number of batches to validate in every epoch, where batch-size equals valid_generator.size. Defaults to 4.
  • metrics (dict[str, callable], optional) –

    Additional metrics to be logged (besides loss). metrics should be a dict where

    • Keys are metric names (e.g. ‘analytic_mse’);
    • Values are functions (callables) that compute the metric value. These functions must accept the same input as the differential equation ode_system.
  • n_output_units (int, optional) – Number of output units for each neural network. Ignored if nets is specified. Defaults to 1.
  • batch_size (int) – [DEPRECATED and IGNORED] Each batch will use all samples generated. Please specify n_batches_train and n_batches_valid instead.
  • shuffle (bool) – [DEPRECATED and IGNORED] Shuffling should be performed by generators.
additional_loss(residual, funcs, coords)

Additional loss terms for training. This method is to be overridden by subclasses. This method can use any of the internal variables: self.nets, self.conditions, self.global_epoch, etc.

Parameters:
  • residual (torch.Tensor) – Residual tensor of differential equation. It has shape (N_SAMPLES, N_EQUATIONS)
  • funcs (List[torch.Tensor]) – Outputs of the networks after parameterization. There are len(nets) entries in total. Each entry is a tensor of shape (N_SAMPLES, N_OUTPUT_UNITS).
  • coords (List[torch.Tensor]) – Inputs to the networks; a.k.a. the spatio-temporal coordinates of the system. There are N_COORDS entries in total. Each entry is a tensor of shape (N_SAMPLES, 1).
Returns:

Additional loss. Must be a torch.Tensor of empty shape (scalar).

Return type:

torch.Tensor

compute_func_val(net, cond, *coordinates)

Compute the function value evaluated on the points specified by coordinates.

Parameters:
  • net (torch.nn.Module) – The network to be parameterized and evaluated.
  • cond (neurodiffeq.conditions.BaseCondition) – The condition (a.k.a. parameterization) for the network.
  • coordinates (tuple[torch.Tensor]) – A tuple of coordinate components, each with shape = (-1, 1).
Returns:

Function values at the sampled points.

Return type:

torch.Tensor

fit(max_epochs, callbacks=(), tqdm_file=<_io.TextIOWrapper name='<stderr>' mode='w' encoding='utf-8'>, **kwargs)

Run multiple epochs of training and validation, update best loss at the end of each epoch.

If callbacks is passed, callbacks are run, one at a time, after training, validating and updating best model.

Parameters:
  • max_epochs (int) – Number of epochs to run.
  • callbacks (list[callable]) – A list of callback functions. Each function should accept the solver instance itself as its only argument.
  • tqdm_file (io.StringIO or _io.TextIOWrapper) – File to write the tqdm progress bar to. If set to None, tqdm is not used at all. Defaults to sys.stderr.

Note

  1. This method does not return the solution; use the .get_solution() method for that.
  2. A callback cb(solver) can set solver._stop_training to True to perform early stopping.
get_internals(var_names=None, return_type='list')

Return internal variable(s) of the solver

  • If var_names == ‘all’, return all internal variables as a dict.
  • If var_names is a single str, return the corresponding variable.
  • If var_names is a list and return_type == ‘list’, return corresponding internal variables as a list.
  • If var_names is a list and return_type == ‘dict’, return a dict with keys in var_names.
Parameters:
  • var_names (str or list[str]) – An internal variable name or a list of internal variable names.
  • return_type (str) – {‘list’, ‘dict’}; Ignored if var_names is a string.
Returns:

A single variable, or a list/dict of internal variables as indicated above.

Return type:

list or dict or any

get_residuals(*coords, to_numpy=False, best=True, no_reshape=False)

Get the residuals of the differential equation (or system of differential equations) evaluated at given points.

Parameters:
  • coords (tuple[torch.Tensor] or tuple[np.ndarray]) – The coordinate values where the residual(s) shall be evaluated. If numpy arrays are passed, the method implicitly creates torch tensors with corresponding values.
  • to_numpy (bool) – Whether to return numpy arrays. Defaults to False.
  • best (bool) – If set to False, the network from the most recent epoch will be used to evaluate the residuals. If set to True, the network from the epoch with the lowest validation loss will be used to evaluate the residuals. Defaults to True.
  • no_reshape (bool) – If set to True, no reshaping will be performed on output. Defaults to False.
Returns:

The residuals evaluated at given points. If there is only one equation in the differential equation system, a single torch tensor (or numpy array) will be returned. If there are multiple equations, a list of torch tensors (or numpy arrays) will be returned. The returned shape will be the same as the first input coordinate, unless no_reshape is set to True. Note that the return value will always be torch tensors (even if coords are numpy arrays) unless to_numpy is explicitly set to True.

Return type:

list[torch.Tensor or numpy.array] or torch.Tensor or numpy.array

get_solution(copy=True, best=True)

Get a (callable) solution object. See this usage example:

solution = solver.get_solution()
point_coords = train_generator.get_examples()
value_at_points = solution(point_coords)
Parameters:
  • copy (bool) – Whether to make a copy of the networks so that subsequent training doesn’t affect the solution; Defaults to True.
  • best (bool) – Whether to return the solution with lowest loss instead of the solution after the last epoch. Defaults to True.
Returns:

A solution object which can be called. To evaluate the solution on certain points, you should pass the coordinates vector(s) to the returned solution.

Return type:

BaseSolution

global_epoch

Global epoch count, always equal to the length of train loss history.

Returns: Number of training epochs that have been run.
Return type: int
run_train_epoch()

Run a training epoch, update history, and perform gradient descent.

run_valid_epoch()

Run a validation epoch and update history.

class neurodiffeq.solvers.Solver2D(pde_system, conditions, xy_min=None, xy_max=None, nets=None, train_generator=None, valid_generator=None, analytic_solutions=None, optimizer=None, loss_fn=None, n_batches_train=1, n_batches_valid=4, metrics=None, n_output_units=1, batch_size=None, shuffle=None)

Bases: neurodiffeq.solvers.BaseSolver

A solver class for solving PDEs in 2 dimensions.

Parameters:
  • pde_system (callable) – The PDE system to solve, which maps two torch.Tensors to a tuple of PDE residuals (tuple[torch.Tensor]); both the input tensors and the output residuals must have shape (n_samples, 1).
  • conditions (list[neurodiffeq.conditions.BaseCondition]) – List of conditions for each target function.
  • xy_min (tuple[float, float], optional) – The lower bound of 2 dimensions. If we only care about \(x \geq x_0\) and \(y \geq y_0\), then xy_min is (x_0, y_0). Only needed when train_generator or valid_generator are not specified. Defaults to None
  • xy_max (tuple[float, float], optional) – The upper bound of 2 dimensions. If we only care about \(x \leq x_1\) and \(y \leq y_1\), then xy_max is (x_1, y_1). Only needed when train_generator or valid_generator are not specified. Defaults to None
  • nets (list[torch.nn.Module], optional) – List of neural networks for parameterized solution. If provided, length of nets must equal that of conditions
  • train_generator (neurodiffeq.generators.BaseGenerator, optional) – Generator for sampling training points, which must provide a .get_examples() method and a .size field. train_generator must be specified if xy_min and xy_max are not set.
  • valid_generator (neurodiffeq.generators.BaseGenerator, optional) – Generator for sampling validation points, which must provide a .get_examples() method and a .size field. valid_generator must be specified if xy_min and xy_max are not set.
  • analytic_solutions (callable, optional) – Analytical solutions to be compared with neural net solutions. It maps a torch.Tensor to a tuple of function values. Output shape should match that of nets.
  • optimizer (torch.optim.Optimizer, optional) – Optimizer to be used for training. Defaults to a torch.optim.Adam instance that trains on all parameters of nets.
  • loss_fn (str or torch.nn.modules.loss._Loss or callable) –

    The loss function used for training.

    • If a str, must be present in the keys of neurodiffeq.losses._losses.
    • If a torch.nn.modules.loss._Loss instance, just pass the instance.
    • If any other callable, it must map A) a residual tensor (shape (n_points, n_equations)), B) a function values tuple (length n_funcs, each element a tensor of shape (n_points, 1)), and C) a coordinate values tuple (length n_coords, each element a tensor of shape (n_points, 1)) to a tensor of empty shape (i.e. a scalar). The returned tensor must be connected to the computational graph, so that backpropagation can be performed.
  • n_batches_train (int, optional) – Number of batches to train in every epoch, where batch-size equals train_generator.size. Defaults to 1.
  • n_batches_valid (int, optional) – Number of batches to validate in every epoch, where batch-size equals valid_generator.size. Defaults to 4.
  • metrics (dict[str, callable], optional) –

    Additional metrics to be logged (besides loss). metrics should be a dict where

    • Keys are metric names (e.g. ‘analytic_mse’);
    • Values are functions (callables) that compute the metric value. These functions must accept the same input as the differential equation pde_system.
  • n_output_units (int, optional) – Number of output units for each neural network. Ignored if nets is specified. Defaults to 1.
  • batch_size (int) – [DEPRECATED and IGNORED] Each batch will use all samples generated. Please specify n_batches_train and n_batches_valid instead.
  • shuffle (bool) – [DEPRECATED and IGNORED] Shuffling should be performed by generators.
additional_loss(residual, funcs, coords)

Additional loss terms for training. This method is to be overridden by subclasses. This method can use any of the internal variables: self.nets, self.conditions, self.global_epoch, etc.

Parameters:
  • residual (torch.Tensor) – Residual tensor of differential equation. It has shape (N_SAMPLES, N_EQUATIONS)
  • funcs (List[torch.Tensor]) – Outputs of the networks after parameterization. There are len(nets) entries in total. Each entry is a tensor of shape (N_SAMPLES, N_OUTPUT_UNITS).
  • coords (List[torch.Tensor]) – Inputs to the networks; a.k.a. the spatio-temporal coordinates of the system. There are N_COORDS entries in total. Each entry is a tensor of shape (N_SAMPLES, 1).
Returns:

Additional loss. Must be a torch.Tensor of empty shape (scalar).

Return type:

torch.Tensor

compute_func_val(net, cond, *coordinates)

Compute the function value evaluated on the points specified by coordinates.

Parameters:
  • net (torch.nn.Module) – The network to be parameterized and evaluated.
  • cond (neurodiffeq.conditions.BaseCondition) – The condition (a.k.a. parameterization) for the network.
  • coordinates (tuple[torch.Tensor]) – A tuple of coordinate components, each with shape = (-1, 1).
Returns:

Function values at the sampled points.

Return type:

torch.Tensor

fit(max_epochs, callbacks=(), tqdm_file=<_io.TextIOWrapper name='<stderr>' mode='w' encoding='utf-8'>, **kwargs)

Run multiple epochs of training and validation, update best loss at the end of each epoch.

If callbacks is passed, callbacks are run, one at a time, after training, validating and updating best model.

Parameters:
  • max_epochs (int) – Number of epochs to run.
  • callbacks (list[callable]) – A list of callback functions. Each function should accept the solver instance itself as its only argument.
  • tqdm_file (io.StringIO or _io.TextIOWrapper) – File to write the tqdm progress bar to. If set to None, tqdm is not used at all. Defaults to sys.stderr.

Note

  1. This method does not return the solution; use the .get_solution() method for that.
  2. A callback cb(solver) can set solver._stop_training to True to perform early stopping.
get_internals(var_names=None, return_type='list')

Return internal variable(s) of the solver

  • If var_names == ‘all’, return all internal variables as a dict.
  • If var_names is a single str, return the corresponding variable.
  • If var_names is a list and return_type == ‘list’, return corresponding internal variables as a list.
  • If var_names is a list and return_type == ‘dict’, return a dict with keys in var_names.
Parameters:
  • var_names (str or list[str]) – An internal variable name or a list of internal variable names.
  • return_type (str) – {‘list’, ‘dict’}; Ignored if var_names is a string.
Returns:

A single variable, or a list/dict of internal variables as indicated above.

Return type:

list or dict or any

get_residuals(*coords, to_numpy=False, best=True, no_reshape=False)

Get the residuals of the differential equation (or system of differential equations) evaluated at given points.

Parameters:
  • coords (tuple[torch.Tensor] or tuple[np.ndarray]) – The coordinate values where the residual(s) shall be evaluated. If numpy arrays are passed, the method implicitly creates torch tensors with corresponding values.
  • to_numpy (bool) – Whether to return numpy arrays. Defaults to False.
  • best (bool) – If set to False, the network from the most recent epoch will be used to evaluate the residuals. If set to True, the network from the epoch with the lowest validation loss will be used to evaluate the residuals. Defaults to True.
  • no_reshape (bool) – If set to True, no reshaping will be performed on output. Defaults to False.
Returns:

The residuals evaluated at given points. If there is only one equation in the differential equation system, a single torch tensor (or numpy array) will be returned. If there are multiple equations, a list of torch tensors (or numpy arrays) will be returned. The returned shape will be the same as the first input coordinate, unless no_reshape is set to True. Note that the return value will always be torch tensors (even if coords are numpy arrays) unless to_numpy is explicitly set to True.

Return type:

list[torch.Tensor or numpy.array] or torch.Tensor or numpy.array

get_solution(copy=True, best=True)

Get a (callable) solution object. See this usage example:

solution = solver.get_solution()
point_coords = train_generator.get_examples()
value_at_points = solution(point_coords)
Parameters:
  • copy (bool) – Whether to make a copy of the networks so that subsequent training doesn’t affect the solution; Defaults to True.
  • best (bool) – Whether to return the solution with lowest loss instead of the solution after the last epoch. Defaults to True.
Returns:

A solution object which can be called. To evaluate the solution on certain points, you should pass the coordinates vector(s) to the returned solution.

Return type:

BaseSolution

global_epoch

Global epoch count, always equal to the length of train loss history.

Returns: Number of training epochs that have been run.
Return type: int
run_train_epoch()

Run a training epoch, update history, and perform gradient descent.

run_valid_epoch()

Run a validation epoch and update history.

class neurodiffeq.solvers.SolverSpherical(pde_system, conditions, r_min=None, r_max=None, nets=None, train_generator=None, valid_generator=None, analytic_solutions=None, optimizer=None, loss_fn=None, n_batches_train=1, n_batches_valid=4, metrics=None, enforcer=None, n_output_units=1, shuffle=None, batch_size=None)

Bases: neurodiffeq.solvers.BaseSolver

A solver class for solving PDEs in spherical coordinates

Parameters:
  • pde_system (callable) – The PDE system to solve, which maps a tuple of three coordinates to a tuple of PDE residuals; both the coordinates and the PDE residuals must have shape (n_samples, 1).
  • conditions (list[neurodiffeq.conditions.BaseCondition]) – List of boundary conditions for each target function.
  • r_min (float, optional) – Radius for inner boundary (\(r_0>0\)). Ignored if train_generator and valid_generator are both set.
  • r_max (float, optional) – Radius for outer boundary (\(r_1>r_0\)). Ignored if train_generator and valid_generator are both set.
  • nets (list[torch.nn.Module], optional) – List of neural networks for parameterized solution. If provided, length of nets must equal that of conditions
  • train_generator (neurodiffeq.generators.BaseGenerator, optional) – Generator for sampling training points, which must provide a .get_examples() method and a .size field. train_generator must be specified if r_min and r_max are not set.
  • valid_generator (neurodiffeq.generators.BaseGenerator, optional) – Generator for sampling validation points, which must provide a .get_examples() method and a .size field. valid_generator must be specified if r_min and r_max are not set.
  • analytic_solutions (callable, optional) – Analytical solutions to be compared with neural net solutions. It maps a tuple of three coordinates to a tuple of function values. Output shape should match that of nets.
  • optimizer (torch.optim.Optimizer, optional) – Optimizer to be used for training. Defaults to a torch.optim.Adam instance that trains on all parameters of nets.
  • loss_fn (str or torch.nn.modules.loss._Loss or callable) –

    The loss function used for training.

    • If a str, must be present in the keys of neurodiffeq.losses._losses.
    • If a torch.nn.modules.loss._Loss instance, just pass the instance.
    • If any other callable, it must map A) a residual tensor (shape (n_points, n_equations)), B) a function values tuple (length n_funcs, each element a tensor of shape (n_points, 1)), and C) a coordinate values tuple (length n_coords, each element a tensor of shape (n_points, 1)) to a tensor of empty shape (i.e. a scalar). The returned tensor must be connected to the computational graph, so that backpropagation can be performed.
  • n_batches_train (int, optional) – Number of batches to train in every epoch, where batch-size equals train_generator.size. Defaults to 1.
  • n_batches_valid (int, optional) – Number of batches to validate in every epoch, where batch-size equals valid_generator.size. Defaults to 4.
  • metrics (dict, optional) –

    Additional metrics to be logged (besides loss). metrics should be a dict where

    • Keys are metric names (e.g. ‘analytic_mse’);
    • Values are functions (callables) that compute the metric value. These functions must accept the same input as the differential equation pde_system.
  • enforcer (callable) – A function of signature enforcer(net: nn.Module, cond: neurodiffeq.conditions.BaseCondition, coords: Tuple[torch.Tensor]) -> torch.Tensor that returns the dependent variable value evaluated on the batch.
  • n_output_units (int, optional) – Number of output units for each neural network. Ignored if nets is specified. Defaults to 1.
  • batch_size (int) – [DEPRECATED and IGNORED] Each batch will use all samples generated. Please specify n_batches_train and n_batches_valid instead.
  • shuffle (bool) – [DEPRECATED and IGNORED] Shuffling should be performed by generators.
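A custom enforcer is just a callable with the signature given above. This sketch reproduces what the default behavior amounts to, by delegating to the condition's own parameterization (a real custom enforcer would typically post-process or replace this output):

```python
def my_enforcer(net, cond, coords):
    # Delegate to the condition's parameterization; a custom enforcer
    # could instead compute the dependent variable any other way,
    # as long as it returns a tensor evaluated on the batch
    return cond.enforce(net, *coords)

# Passed to the solver as: SolverSpherical(..., enforcer=my_enforcer)
```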
additional_loss(residual, funcs, coords)

Additional loss terms for training. This method is to be overridden by subclasses. This method can use any of the internal variables: self.nets, self.conditions, self.global_epoch, etc.

Parameters:
  • residual (torch.Tensor) – Residual tensor of differential equation. It has shape (N_SAMPLES, N_EQUATIONS)
  • funcs (List[torch.Tensor]) – Outputs of the networks after parameterization. There are len(nets) entries in total. Each entry is a tensor of shape (N_SAMPLES, N_OUTPUT_UNITS).
  • coords (List[torch.Tensor]) – Inputs to the networks; a.k.a. the spatio-temporal coordinates of the system. There are N_COORDS entries in total. Each entry is a tensor of shape (N_SAMPLES, 1).
Returns:

Additional loss. Must be a torch.Tensor of empty shape (scalar).

Return type:

torch.Tensor

compute_func_val(net, cond, *coordinates)

Enforce condition on network with inputs. If self.enforcer is set, use it. Otherwise, fill cond.enforce() with as many arguments as needed.

Parameters:
  • net (torch.nn.Module) – Network for parameterized solution.
  • cond (neurodiffeq.conditions.BaseCondition) – Condition (a.k.a. parameterization) for the network.
  • coordinates (tuple[torch.Tensor]) – A tuple of vectors, each with shape = (-1, 1).
Returns:

Function values at sampled points.

Return type:

torch.Tensor

fit(max_epochs, callbacks=(), tqdm_file=<_io.TextIOWrapper name='<stderr>' mode='w' encoding='utf-8'>, **kwargs)

Run multiple epochs of training and validation, update best loss at the end of each epoch.

If callbacks is passed, callbacks are run, one at a time, after training, validating and updating best model.

Parameters:
  • max_epochs (int) – Number of epochs to run.
  • callbacks (list[callable]) – A list of callback functions. Each function should accept the solver instance itself as its only argument.
  • tqdm_file (io.StringIO or _io.TextIOWrapper) – File to write the tqdm progress bar to. If set to None, tqdm is not used at all. Defaults to sys.stderr.

Note

  1. This method does not return the solution; use the .get_solution() method for that.
  2. A callback cb(solver) can set solver._stop_training to True to perform early stopping.
get_internals(var_names=None, return_type='list')

Return internal variable(s) of the solver

  • If var_names == ‘all’, return all internal variables as a dict.
  • If var_names is a single str, return the corresponding variable.
  • If var_names is a list and return_type == ‘list’, return corresponding internal variables as a list.
  • If var_names is a list and return_type == ‘dict’, return a dict with keys in var_names.
Parameters:
  • var_names (str or list[str]) – An internal variable name or a list of internal variable names.
  • return_type (str) – {‘list’, ‘dict’}; Ignored if var_names is a string.
Returns:

A single variable, or a list/dict of internal variables as indicated above.

Return type:

list or dict or any

get_residuals(*coords, to_numpy=False, best=True, no_reshape=False)

Get the residuals of the differential equation (or system of differential equations) evaluated at given points.

Parameters:
  • coords (tuple[torch.Tensor] or tuple[np.ndarray]) – The coordinate values where the residual(s) shall be evaluated. If numpy arrays are passed, the method implicitly creates torch tensors with corresponding values.
  • to_numpy (bool) – Whether to return numpy arrays. Defaults to False.
  • best (bool) – If set to False, the network from the most recent epoch will be used to evaluate the residuals. If set to True, the network from the epoch with the lowest validation loss will be used to evaluate the residuals. Defaults to True.
  • no_reshape (bool) – If set to True, no reshaping will be performed on output. Defaults to False.
Returns:

The residuals evaluated at given points. If there is only one equation in the differential equation system, a single torch tensor (or numpy array) will be returned. If there are multiple equations, a list of torch tensors (or numpy arrays) will be returned. The returned shape will be the same as the first input coordinate, unless no_reshape is set to True. Note that the return value will always be torch tensors (even if coords are numpy arrays) unless to_numpy is explicitly set to True.

Return type:

list[torch.Tensor or numpy.array] or torch.Tensor or numpy.array

get_solution(copy=True, best=True, harmonics_fn=None)

Get a (callable) solution object. See this usage example:

solution = solver.get_solution()
point_coords = train_generator.get_examples()
value_at_points = solution(point_coords)
Parameters:
  • copy (bool) – Whether to make a copy of the networks so that subsequent training doesn’t affect the solution; Defaults to True.
  • best (bool) – Whether to return the solution with lowest loss instead of the solution after the last epoch. Defaults to True.
  • harmonics_fn (callable) – If set, use it as function basis for returned solution.
Returns:

The solution after training.

Return type:

neurodiffeq.solvers.BaseSolution

global_epoch

Global epoch count, always equal to the length of train loss history.

Returns: Number of training epochs that have been run.
Return type: int
run_train_epoch()

Run a training epoch, update history, and perform gradient descent.

run_valid_epoch()

Run a validation epoch and update history.

neurodiffeq.monitors

class neurodiffeq.monitors.BaseMonitor(check_every=None)

Bases: abc.ABC

A tool for checking the status of the neural network during training.

A monitor keeps track of a matplotlib.figure.Figure instance and updates the plot whenever its check() method is called (usually by a neurodiffeq.solvers.BaseSolver instance).

Note

Currently, the check() method can only run synchronously. It blocks the training / validation process, so don’t call the check() method too often.

to_callback(fig_dir=None, format=None, logger=None)

Return a callback that updates the monitor plots, which will be run

  1. Every self.check_every epochs; and
  2. After the last local epoch.
Parameters:
  • fig_dir (str) – Directory for saving monitor figs; if not specified, figs will not be saved.
  • format (str) – Format for saving figures: {‘jpg’, ‘png’ (default), …}.
  • logger (str or logging.Logger) – The logger (or its name) to be used for the returned callback. Defaults to the ‘root’ logger.
Returns:

The callback that updates the monitor plots.

Return type:

neurodiffeq.callbacks.BaseCallback

class neurodiffeq.monitors.MetricsMonitor(check_every=None)

Bases: neurodiffeq.monitors.BaseMonitor

A monitor for visualizing the loss and other metrics. This monitor does not visualize the solution.

Parameters: check_every (int, optional) – The frequency of checking the neural network, i.e. the number of epochs between two consecutive checks. Defaults to 100.
to_callback(fig_dir=None, format=None, logger=None)

Return a callback that updates the monitor plots, which will be run

  1. Every self.check_every epochs; and
  2. After the last local epoch.
Parameters:
  • fig_dir (str) – Directory for saving monitor figs; if not specified, figs will not be saved.
  • format (str) – Format for saving figures: {‘jpg’, ‘png’ (default), …}.
  • logger (str or logging.Logger) – The logger (or its name) to be used for the returned callback. Defaults to the ‘root’ logger.
Returns:

The callback that updates the monitor plots.

Return type:

neurodiffeq.callbacks.BaseCallback

class neurodiffeq.monitors.Monitor1D(t_min, t_max, check_every=None)

Bases: neurodiffeq.monitors.BaseMonitor

A monitor for checking the status of the neural network during training.

Parameters:
  • t_min (float) – The lower bound of time domain that we want to monitor.
  • t_max (float) – The upper bound of time domain that we want to monitor.
  • check_every (int, optional) – The frequency of checking the neural network represented by the number of epochs between two checks. Defaults to 100.
check(nets, conditions, history)

Draw 2 plots: One shows the shape of the current solution. The other shows the history training loss and validation loss.

Parameters:
  • nets (list[torch.nn.Module]) – The neural networks that approximate the ODE (system).
  • conditions (list[neurodiffeq.conditions.BaseCondition]) – The initial/boundary conditions of the ODE (system).
  • history (dict['train': list[float], 'valid': list[float]]) – The history of training loss and validation loss. The ‘train’ entry is a list of training loss and the ‘valid’ entry is a list of validation loss.

Note

check is meant to be called by the function solve and solve_system.

to_callback(fig_dir=None, format=None, logger=None)

Return a callback that updates the monitor plots, which will be run

  1. Every self.check_every epochs; and
  2. After the last local epoch.
Parameters:
  • fig_dir (str) – Directory for saving monitor figs; if not specified, figs will not be saved.
  • format (str) – Format for saving figures: {‘jpg’, ‘png’ (default), …}.
  • logger (str or logging.Logger) – The logger (or its name) to be used for the returned callback. Defaults to the ‘root’ logger.
Returns:

The callback that updates the monitor plots.

Return type:

neurodiffeq.callbacks.BaseCallback

class neurodiffeq.monitors.Monitor2D(xy_min, xy_max, check_every=None, valid_generator=None, solution_style='heatmap', equal_aspect=True, ax_width=5.0, ax_height=4.0, n_col=2, levels=20)

Bases: neurodiffeq.monitors.BaseMonitor

A monitor for checking the status of the neural network during training. The number and layout of subplots (matplotlib axes) will be finalized after the first .check() call.

Parameters:
  • xy_min (tuple[float, float], optional) – The lower bound of 2 dimensions. If we only care about \(x \geq x_0\) and \(y \geq y_0\), then xy_min is (x_0, y_0).
  • xy_max (tuple[float, float], optional) – The upper bound of 2 dimensions. If we only care about \(x \leq x_1\) and \(y \leq y_1\), then xy_max is (x_1, y_1).
  • check_every (int, optional) – The frequency of checking the neural network represented by the number of epochs between two checks. Defaults to 100.
  • valid_generator (neurodiffeq.generators.BaseGenerator) – The generator used to sample points from the domain when visualizing the solution. The generator is only called once (when the monitor is first used), and its outputs are stored. Defaults to a 32x32 Generator2D with method ‘equally-spaced’.
  • solution_style (str) –
    • If set to ‘heatmap’, solution visualization will be a contour heat map of \(u\) w.r.t. \(x\) and \(y\). Useful when visualizing a 2-D spatial solution.
    • If set to ‘curves’, solution visualization will be \(u\)-\(x\) curves instead of a 2d heat map. Each curve corresponds to a \(t\) value. Useful when visualizing 1D spatio-temporal solution. The first coordinate is interpreted as \(x\) and the second as \(t\).

    Defaults to ‘heatmap’.

  • equal_aspect (bool) – Whether to set the aspect ratio to 1:1 for the heatmap. Defaults to True. Ignored if solution_style is ‘curves’.
  • ax_width (float) – Width of each solution visualization. Note that this is different from the width of the metrics-history plot, which is equal to ax_width \(\times\) n_col.
  • ax_height (float) – Height for each solution visualization and metrics history plot.
  • n_col (int) – Number of solution visualizations to plot in each row. Note there is always only 1 plot for metrics history plot per row.
  • levels (int) – Number of levels to plot with contourf (heatmap). Defaults to 20.
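The default 32x32 ‘equally-spaced’ sampling mentioned above can be sketched in pure Python (a simplified stand-in for Generator2D, with no torch involved):

```python
def equally_spaced_grid(xy_min, xy_max, nx=32, ny=32):
    """Return (x, y) pairs on an equally spaced nx-by-ny mesh,
    analogous to the default grid a 2-D monitor visualizes on."""
    (x0, y0), (x1, y1) = xy_min, xy_max
    xs = [x0 + (x1 - x0) * i / (nx - 1) for i in range(nx)]
    ys = [y0 + (y1 - y0) * j / (ny - 1) for j in range(ny)]
    return [(x, y) for y in ys for x in xs]

grid = equally_spaced_grid((0.0, 0.0), (1.0, 2.0), nx=32, ny=32)
print(len(grid))           # → 1024
print(grid[0], grid[-1])   # → (0.0, 0.0) (1.0, 2.0)
```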
check(nets, conditions, history)

Draw 2 plots: One shows the shape of the current solution (with heat map). The other shows the history training loss and validation loss.

Parameters:
  • nets (list [torch.nn.Module]) – The neural networks that approximate the PDE.
  • conditions (list [neurodiffeq.conditions.BaseCondition]) – The initial/boundary condition of the PDE.
  • history (dict['train': list[float], 'valid': list[float]]) – The history of training loss and validation loss. The ‘train’ entry is a list of training loss and ‘valid’ entry is a list of validation loss.

Note

check is meant to be called by the function solve2D.

to_callback(fig_dir=None, format=None, logger=None)

Return a callback that updates the monitor plots, which will be run

  1. Every self.check_every epochs; and
  2. After the last local epoch.
Parameters:
  • fig_dir (str) – Directory for saving monitor figs; if not specified, figs will not be saved.
  • format (str) – Format for saving figures: {‘jpg’, ‘png’ (default), …}.
  • logger (str or logging.Logger) – The logger (or its name) to be used for the returned callback. Defaults to the ‘root’ logger.
Returns:

The callback that updates the monitor plots.

Return type:

neurodiffeq.callbacks.BaseCallback

class neurodiffeq.monitors.MonitorSpherical(r_min, r_max, check_every=None, var_names=None, shape=(10, 10, 10), r_scale='linear', theta_min=0.0, theta_max=3.141592653589793, phi_min=0.0, phi_max=6.283185307179586)

Bases: neurodiffeq.monitors.BaseMonitor

A monitor for checking the status of the neural network during training.

Parameters:
  • r_min (float) – The lower bound of radius, i.e., radius of interior boundary.
  • r_max (float) – The upper bound of radius, i.e., radius of exterior boundary.
  • check_every (int, optional) – The frequency of checking the neural network represented by the number of epochs between two checks. Defaults to 100.
  • var_names (list[str]) – Names of dependent variables. If provided, shall be used for plot titles. Defaults to None.
  • shape (tuple[int]) – Shape of mesh for visualizing the solution. Defaults to (10, 10, 10).
  • r_scale (str) – ‘linear’ or ‘log’. Controls the grid point in the \(r\) direction. Defaults to ‘linear’.
  • theta_min (float) – The lower bound of polar angle. Defaults to \(0\).
  • theta_max (float) – The upper bound of polar angle. Defaults to \(\pi\).
  • phi_min (float) – The lower bound of azimuthal angle. Defaults to \(0\).
  • phi_max (float) – The upper bound of azimuthal angle. Defaults to \(2\pi\).
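The difference between r_scale='linear' and r_scale='log' grid points can be sketched with the standard library (an illustration of the described behavior, not the monitor's actual code):

```python
import math

def r_points(r_min, r_max, n, r_scale='linear'):
    """Grid points along r: equally spaced either in r itself
    ('linear') or in log(r) ('log'), as described above."""
    if r_scale == 'linear':
        return [r_min + (r_max - r_min) * i / (n - 1) for i in range(n)]
    elif r_scale == 'log':
        a, b = math.log(r_min), math.log(r_max)
        return [math.exp(a + (b - a) * i / (n - 1)) for i in range(n)]
    raise ValueError(r_scale)

print(r_points(1.0, 100.0, 3, 'linear'))  # → [1.0, 50.5, 100.0]
print(r_points(1.0, 100.0, 3, 'log'))     # → approximately [1.0, 10.0, 100.0]
```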
check(nets, conditions, history, analytic_mse_history=None)

Draw (3n + 2) plots

  1. For each function \(u_i(r, \phi, \theta)\), there are 3 axes:
    • one ax for \(u\)-\(r\) curves grouped by \(\phi\)
    • one ax for \(u\)-\(r\) curves grouped by \(\theta\)
    • one ax for \(u\)-\(\theta\)-\(\phi\) contour heat map
  2. Additionally, one ax for training and validation loss, and another for the rest of the metrics
Parameters:
  • nets (list [torch.nn.Module]) – The neural networks that approximate the PDE.
  • conditions (list [neurodiffeq.conditions.BaseCondition]) – The initial/boundary condition of the PDE.
  • history (dict[str, list[float]]) – A dict of history of training metrics and validation metrics, where keys are metric names (str) and values are list of metrics values (list[float]). It must contain a ‘train_loss’ key and a ‘valid_loss’ key.
  • analytic_mse_history (dict['train': list[float], 'valid': list[float]], deprecated) – [DEPRECATED] Include ‘train_analytic_mse’ and ‘valid_analytic_mse’ in history instead.

Note

check is meant to be called by neurodiffeq.solvers.BaseSolver.

customization()

Customized tweaks can be implemented by overwriting this method.

set_variable_count(n)

Manually set the number of scalar fields to be visualized; If not set, defaults to length of nets passed to self.check() every time self.check() is called.

Parameters:n (int) – number of scalar fields to overwrite default
Returns:self
to_callback(fig_dir=None, format=None, logger=None)

Return a callback that updates the monitor plots, which will be run

  1. Every self.check_every epochs; and
  2. After the last local epoch.
Parameters:
  • fig_dir (str) – Directory for saving monitor figs; if not specified, figs will not be saved.
  • format (str) – Format for saving figures: {‘jpg’, ‘png’ (default), …}.
  • logger (str or logging.Logger) – The logger (or its name) to be used for the returned callback. Defaults to the ‘root’ logger.
Returns:

The callback that updates the monitor plots.

Return type:

neurodiffeq.callbacks.BaseCallback

unset_variable_count()

Manually unset the number of scalar fields to be visualized; Once unset, the number defaults to length of nets passed to self.check() every time self.check() is called.

Returns:self
class neurodiffeq.monitors.MonitorSphericalHarmonics(r_min, r_max, check_every=None, var_names=None, shape=(10, 10, 10), r_scale='linear', harmonics_fn=None, theta_min=0.0, theta_max=3.141592653589793, phi_min=0.0, phi_max=6.283185307179586, max_degree=None)

Bases: neurodiffeq.monitors.MonitorSpherical

A monitor for checking the status of the neural network during training.

Parameters:
  • r_min (float) – The lower bound of radius, i.e., radius of interior boundary.
  • r_max (float) – The upper bound of radius, i.e., radius of exterior boundary.
  • check_every (int, optional) – The frequency of checking the neural network represented by the number of epochs between two checks. Defaults to 100.
  • var_names (list[str]) – The names of dependent variables; if provided, shall be used for plot titles. Defaults to None
  • shape (tuple[int]) – Shape of mesh for visualizing the solution. Defaults to (10, 10, 10).
  • r_scale (str) – ‘linear’ or ‘log’. Controls the grid point in the \(r\) direction. Defaults to ‘linear’.
  • harmonics_fn (callable) – A mapping from \(\theta\) and \(\phi\) to basis functions, e.g., spherical harmonics.
  • theta_min (float) – The lower bound of polar angle. Defaults to \(0\)
  • theta_max (float) – The upper bound of polar angle. Defaults to \(\pi\).
  • phi_min (float) – The lower bound of azimuthal angle. Defaults to \(0\).
  • phi_max (float) – The upper bound of azimuthal angle. Defaults to \(2\pi\).
  • max_degree (int) – DEPRECATED and SUPERSEDED by harmonics_fn. Highest degree used for the harmonic basis.
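harmonics_fn can be any callable mapping \(\theta\) and \(\phi\) to a set of basis values. A hypothetical low-order angular basis might look like the following (the function name and normalization are illustrative only; real use would supply true spherical harmonics, such as the harmonic bases shipped with the library):

```python
import math

def toy_harmonics(theta, phi):
    """A hypothetical 3-term angular basis: constant, cos(theta),
    and sin(theta)*cos(phi). Not an actual spherical-harmonic basis."""
    return [1.0, math.cos(theta), math.sin(theta) * math.cos(phi)]

print(toy_harmonics(math.pi / 2, 0.0))  # → [1.0, ~0.0, 1.0]
```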
check(nets, conditions, history, analytic_mse_history=None)

Draw (3n + 2) plots

  1. For each function \(u_i(r, \phi, \theta)\), there are 3 axes:
    • one ax for \(u\)-\(r\) curves grouped by \(\phi\)
    • one ax for \(u\)-\(r\) curves grouped by \(\theta\)
    • one ax for \(u\)-\(\theta\)-\(\phi\) contour heat map
  2. Additionally, one ax for training and validation loss, and another for the rest of the metrics
Parameters:
  • nets (list [torch.nn.Module]) – The neural networks that approximate the PDE.
  • conditions (list [neurodiffeq.conditions.BaseCondition]) – The initial/boundary condition of the PDE.
  • history (dict[str, list[float]]) – A dict of history of training metrics and validation metrics, where keys are metric names (str) and values are list of metrics values (list[float]). It must contain a ‘train_loss’ key and a ‘valid_loss’ key.
  • analytic_mse_history (dict['train': list[float], 'valid': list[float]], deprecated) – [DEPRECATED] Include ‘train_analytic_mse’ and ‘valid_analytic_mse’ in history instead.

Note

check is meant to be called by neurodiffeq.solvers.BaseSolver.

customization()

Customized tweaks can be implemented by overwriting this method.

set_variable_count(n)

Manually set the number of scalar fields to be visualized; If not set, defaults to length of nets passed to self.check() every time self.check() is called.

Parameters:n (int) – number of scalar fields to overwrite default
Returns:self
to_callback(fig_dir=None, format=None, logger=None)

Return a callback that updates the monitor plots, which will be run

  1. Every self.check_every epochs; and
  2. After the last local epoch.
Parameters:
  • fig_dir (str) – Directory for saving monitor figs; if not specified, figs will not be saved.
  • format (str) – Format for saving figures: {‘jpg’, ‘png’ (default), …}.
  • logger (str or logging.Logger) – The logger (or its name) to be used for the returned callback. Defaults to the ‘root’ logger.
Returns:

The callback that updates the monitor plots.

Return type:

neurodiffeq.callbacks.BaseCallback

unset_variable_count()

Manually unset the number of scalar fields to be visualized; Once unset, the number defaults to length of nets passed to self.check() every time self.check() is called.

Returns:self
class neurodiffeq.monitors.StreamPlotMonitor2D(xy_min, xy_max, pairs, nx=32, ny=32, check_every=None, mask_fn=None, ax_width=13.0, ax_height=10.0, n_col=2, stream_kwargs=None, equal_aspect=True, field_names=None)

Bases: neurodiffeq.monitors.BaseMonitor

to_callback(fig_dir=None, format=None, logger=None)

Return a callback that updates the monitor plots, which will be run

  1. Every self.check_every epochs; and
  2. After the last local epoch.
Parameters:
  • fig_dir (str) – Directory for saving monitor figs; if not specified, figs will not be saved.
  • format (str) – Format for saving figures: {‘jpg’, ‘png’ (default), …}.
  • logger (str or logging.Logger) – The logger (or its name) to be used for the returned callback. Defaults to the ‘root’ logger.
Returns:

The callback that updates the monitor plots.

Return type:

neurodiffeq.callbacks.BaseCallback

neurodiffeq.ode

neurodiffeq.ode.solve(ode, condition, t_min=None, t_max=None, net=None, train_generator=None, valid_generator=None, optimizer=None, criterion=None, n_batches_train=1, n_batches_valid=4, additional_loss_term=None, metrics=None, max_epochs=1000, monitor=None, return_internal=False, return_best=False, batch_size=None, shuffle=None)

Train a neural network to solve an ODE.

Parameters:
  • ode (callable) – The ODE to solve. If the ODE is \(F(x, t) = 0\) where \(x\) is the dependent variable and \(t\) is the independent variable, then ode should be a function that maps \((x, t)\) to \(F(x, t)\).
  • condition (neurodiffeq.conditions.BaseCondition) – The initial/boundary condition.
  • net (torch.nn.Module, optional) – The neural network used to approximate the solution. Defaults to None.
  • t_min (float) – The lower bound of the domain (t) on which the ODE is solved, only needed when train_generator or valid_generator are not specified. Defaults to None
  • t_max (float) – The upper bound of the domain (t) on which the ODE is solved, only needed when train_generator or valid_generator are not specified. Defaults to None
  • train_generator (neurodiffeq.generators.Generator1D, optional) – The example generator to generate 1-D training points. Defaults to None.
  • valid_generator (neurodiffeq.generators.Generator1D, optional) – The example generator to generate 1-D validation points. Defaults to None.
  • optimizer (torch.optim.Optimizer, optional) – The optimization method to use for training. Defaults to None.
  • criterion (torch.nn.modules.loss._Loss, optional) – The loss function to use for training. Defaults to None.
  • n_batches_train (int, optional) – Number of batches to train in every epoch, where batch-size equals train_generator.size. Defaults to 1.
  • n_batches_valid (int, optional) – Number of batches to validate in every epoch, where batch-size equals valid_generator.size. Defaults to 4.
  • additional_loss_term (callable) – Extra terms to add to the loss function besides the part specified by criterion. The input of additional_loss_term should be the same as ode.
  • metrics (dict[string, callable]) – Metrics to keep track of during training. The metrics should be passed as a dictionary where the keys are the names of the metrics, and the values are the corresponding function. The input functions should be the same as ode and the output should be a numeric value. The metrics are evaluated on both the training set and validation set.
  • max_epochs (int, optional) – The maximum number of epochs to train. Defaults to 1000.
  • monitor (neurodiffeq.ode.Monitor, optional) – The monitor to check the status of neural network during training. Defaults to None.
  • return_internal (bool, optional) – Whether to return the nets, conditions, training generator, validation generator, optimizer and loss function. Defaults to False.
  • return_best (bool, optional) – Whether to return the nets that achieved the lowest validation loss. Defaults to False.
  • batch_size (int) – [DEPRECATED and IGNORED] Each batch will use all samples generated. Please specify n_batches_train and n_batches_valid instead.
  • shuffle (bool) – [DEPRECATED and IGNORED] Shuffling should be performed by generators.
Returns:

The solution of the ODE. The history of training loss and validation loss. Optionally, the nets, conditions, training generator, validation generator, optimizer and loss function.

Return type:

tuple[neurodiffeq.ode.Solution, dict] or tuple[neurodiffeq.ode.Solution, dict, dict]

Note

This function is deprecated, use a neurodiffeq.solvers.Solver1D instead.

neurodiffeq.ode.solve_system(ode_system, conditions, t_min, t_max, single_net=None, nets=None, train_generator=None, valid_generator=None, optimizer=None, criterion=None, n_batches_train=1, n_batches_valid=4, additional_loss_term=None, metrics=None, max_epochs=1000, monitor=None, return_internal=False, return_best=False, batch_size=None, shuffle=None)

Train a neural network to solve an ODE.

Parameters:
  • ode_system (callable) – The ODE system to solve. If the ODE system consists of equations \(F_i(x_1, x_2, ..., x_n, t) = 0\) where \(x_i\) is the dependent i-th variable and \(t\) is the independent variable, then ode_system should be a function that maps \((x_1, x_2, ..., x_n, t)\) to a list where the i-th entry is \(F_i(x_1, x_2, ..., x_n, t)\).
  • conditions (list[neurodiffeq.conditions.BaseCondition]) – The initial/boundary conditions. The ith entry of the conditions is the condition that \(x_i\) should satisfy.
  • t_min (float) – The lower bound of the domain (t) on which the ODE is solved, only needed when train_generator or valid_generator are not specified. Defaults to None.
  • t_max (float) – The upper bound of the domain (t) on which the ODE is solved, only needed when train_generator or valid_generator are not specified. Defaults to None.
  • single_net (torch.nn.Module, optional) – The single neural network used to approximate the solution. Only one of single_net and nets should be specified. Defaults to None.
  • nets (list[torch.nn.Module], optional) – The neural networks used to approximate the solution. Defaults to None.
  • train_generator (neurodiffeq.generators.Generator1D, optional) – The example generator to generate 1-D training points. Defaults to None.
  • valid_generator (neurodiffeq.generators.Generator1D, optional) – The example generator to generate 1-D validation points. Defaults to None.
  • optimizer (torch.optim.Optimizer, optional) – The optimization method to use for training. Defaults to None.
  • criterion (torch.nn.modules.loss._Loss, optional) – The loss function to use for training. Defaults to None and sum of square of the output of ode_system will be used.
  • n_batches_train (int, optional) – Number of batches to train in every epoch, where batch-size equals train_generator.size. Defaults to 1.
  • n_batches_valid (int, optional) – Number of batches to validate in every epoch, where batch-size equals valid_generator.size. Defaults to 4.
  • additional_loss_term (callable) – Extra terms to add to the loss function besides the part specified by criterion. The input of additional_loss_term should be the same as ode_system.
  • metrics (dict[str, callable], optional) –

    Additional metrics to be logged (besides loss). metrics should be a dict where

    • Keys are metric names (e.g. ‘analytic_mse’);
    • Values are functions (callables) that computes the metric value. These functions must accept the same input as the differential equation ode_system.
  • max_epochs (int, optional) – The maximum number of epochs to train. Defaults to 1000.
  • monitor (neurodiffeq.ode.Monitor, optional) – The monitor to check the status of the neural network during training. Defaults to None.
  • return_internal (bool, optional) – Whether to return the nets, conditions, training generator, validation generator, optimizer and loss function. Defaults to False.
  • return_best (bool, optional) – Whether to return the nets that achieved the lowest validation loss. Defaults to False.
  • batch_size (int) – [DEPRECATED and IGNORED] Each batch will use all samples generated. Please specify n_batches_train and n_batches_valid instead.
  • shuffle (bool) – [DEPRECATED and IGNORED] Shuffling should be performed by generators.
Returns:

The solution of the ODE. The history of training loss and validation loss. Optionally, the nets, conditions, training generator, validation generator, optimizer and loss function.

Return type:

tuple[neurodiffeq.ode.Solution, dict] or tuple[neurodiffeq.ode.Solution, dict, dict]

Note

This function is deprecated, use a neurodiffeq.solvers.Solver1D instead.
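The expected shape of the ode_system callable — mapping \((x_1, ..., x_n, t)\) to a list of residuals \(F_i\) — can be sketched as follows. The derivative terms are shown only as comments, because computing them requires torch tensors and neurodiffeq's diff; the residuals returned here are placeholders, and the point is solely the call signature and the list return structure:

```python
def ode_system(x1, x2, t):
    """Residuals of a hypothetical 2-equation system. In real use,
    x1, x2, t are torch tensors and derivatives come from
    neurodiffeq.neurodiffeq.diff(x, t)."""
    # F1 = dx1/dt - x2   →  would be: diff(x1, t) - x2
    # F2 = dx2/dt + x1   →  would be: diff(x2, t) + x1
    # Placeholder residuals with the right return structure:
    return [x1 - x2, x2 + x1]

# The solver evaluates the returned list entry-wise;
# conditions[i] is the condition that x_i should satisfy.
print(ode_system(1.0, 2.0, 0.0))  # → [-1.0, 3.0]
```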

neurodiffeq.pde

class neurodiffeq.pde.CustomBoundaryCondition(center_point, dirichlet_control_points, neumann_control_points=None)

Bases: neurodiffeq.conditions.IrregularBoundaryCondition

A boundary condition with irregular shape.

Parameters:
  • center_point (pde.Point) – A point located roughly at the center of the domain. It will be used to sort the control points ‘clockwise’.
  • dirichlet_control_points (list[pde.DirichletControlPoint]) – a list of points on the Dirichlet boundary
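The ‘clockwise’ sorting around center_point mentioned above can be sketched with atan2 (an illustration of the idea, not the class's actual implementation):

```python
import math

def sort_clockwise(points, center):
    """Sort (x, y) points clockwise around `center` — the kind of
    ordering described for the boundary control points above."""
    cx, cy = center
    # Negating the angle turns the natural counter-clockwise order
    # of atan2 into a clockwise order.
    return sorted(points, key=lambda p: -math.atan2(p[1] - cy, p[0] - cx))

pts = [(1, 0), (0, 1), (-1, 0), (0, -1)]
print(sort_clockwise(pts, (0, 0)))  # → [(-1, 0), (0, 1), (1, 0), (0, -1)]
```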
enforce(net, *dimensions)

Enforces this condition on a network.

Parameters:
  • net (torch.nn.Module) – The network whose output is to be re-parameterized.
  • coordinates (torch.Tensor) – Inputs of the neural network.
Returns:

The re-parameterized output, where the condition is automatically satisfied.

Return type:

torch.Tensor

in_domain(*dimensions)

Given the coordinates (numpy.ndarray), this method returns a boolean array indicating whether the points lie within the domain.

Parameters:coordinates (numpy.ndarray) – Input arrays, each with shape (n_samples, 1).
Returns:Whether each point lies within the domain.
Return type:numpy.ndarray

Note

  • This method is meant to be used by monitors for irregular domain visualization.
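A concrete in_domain for, say, a unit disk would return a boolean mask over the sampled coordinates. A numpy-free sketch (the domain here is hypothetical and not part of the library):

```python
def in_unit_disk(xs, ys):
    """Boolean mask: True where (x, y) lies inside the unit disk.
    An actual IrregularBoundaryCondition.in_domain would take and
    return numpy arrays instead of plain lists."""
    return [x * x + y * y <= 1.0 for x, y in zip(xs, ys)]

print(in_unit_disk([0.0, 2.0], [0.0, 0.0]))  # → [True, False]
```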
parameterize(output_tensor, *input_tensors)

Re-parameterizes output(s) of a network.

Parameters:
  • output_tensor (torch.Tensor) – Output of the neural network.
  • input_tensors (torch.Tensor) – Inputs to the neural network; i.e., sampled coordinates; i.e., independent variables.
Returns:

The re-parameterized output of the network.

Return type:

torch.Tensor

Note

This method is abstract for BaseCondition

set_impose_on(ith_unit)

[DEPRECATED] When training several functions with a single, multi-output network, this method is called (by a Solver class or a solve function) to keep track of which output is being parameterized.

Parameters:ith_unit (int) – The index of network output to be parameterized.

Note

This method is deprecated and retained for backward compatibility only. Users interested in enforcing conditions on multi-output networks should consider using a neurodiffeq.conditions.EnsembleCondition.

class neurodiffeq.pde.DirichletControlPoint(loc, val)

Bases: neurodiffeq.pde.Point

A 2D point on the Dirichlet boundary.

Parameters:
  • loc (tuple[float, float]) – The location of the point in the form of \((x, y)\).
  • val (float) – The expected value of \(u\) at this location. (\(u(x, y)\) is the function we are solving for.)
class neurodiffeq.pde.NeumannControlPoint(loc, val, normal_vector)

Bases: neurodiffeq.pde.Point

A 2D point on the Neumann boundary.

Parameters:
  • loc (tuple[float, float]) – The location of the point in the form of \((x, y)\).
  • val (float) – The expected normal derivative of \(u\) at this location. (\(u(x, y)\) is the function we are solving for)
class neurodiffeq.pde.Point(loc)

Bases: object

A 2D point.

Parameters:loc (tuple[float, float]) – The location of the point in the form of \((x, y)\).
neurodiffeq.pde.make_animation(solution, xs, ts)

Create animation of 1-D time-dependent problems.

Parameters:
  • solution (callable) – Solution function returned by solve2D (for a 1-D time-dependent problem).
  • xs (numpy.array) – The locations to evaluate solution.
  • ts (numpy.array) – The time points to evaluate solution.
Returns:

The animation.

Return type:

matplotlib.animation.FuncAnimation

neurodiffeq.pde.solve2D(pde, condition, xy_min=None, xy_max=None, net=None, train_generator=None, valid_generator=None, optimizer=None, criterion=None, n_batches_train=1, n_batches_valid=4, additional_loss_term=None, metrics=None, max_epochs=1000, monitor=None, return_internal=False, return_best=False, batch_size=None, shuffle=None)

Train a neural network to solve a PDE with 2 independent variables.

Parameters:
  • pde (callable) – The PDE to solve. If the PDE is \(F(u, x, y) = 0\) where \(u\) is the dependent variable and \(x\) and \(y\) are the independent variables, then pde should be a function that maps \((u, x, y)\) to \(F(u, x, y)\).
  • condition (neurodiffeq.conditions.BaseCondition) – The initial/boundary condition.
  • xy_min (tuple[float, float], optional) – The lower bound of 2 dimensions. If we only care about \(x \geq x_0\) and \(y \geq y_0\), then xy_min is (x_0, y_0), only needed when train_generator and valid_generator are not specified. Defaults to None
  • xy_max (tuple[float, float], optional) – The upper bound of 2 dimensions. If we only care about \(x \leq x_1\) and \(y \leq y_1\), then xy_max is (x_1, y_1), only needed when train_generator and valid_generator are not specified. Defaults to None
  • net (torch.nn.Module, optional) – The neural network used to approximate the solution. Defaults to None.
  • train_generator (neurodiffeq.generators.Generator2D, optional) – The example generator to generate 2-D training points. Defaults to None.
  • valid_generator (neurodiffeq.generators.Generator2D, optional) – The example generator to generate 2-D validation points. Defaults to None.
  • optimizer (torch.optim.Optimizer, optional) – The optimization method to use for training. Defaults to None.
  • criterion (torch.nn.modules.loss._Loss, optional) – The loss function to use for training. Defaults to None.
  • additional_loss_term (callable) – Extra terms to add to the loss function besides the part specified by criterion. The input of additional_loss_term should be the same as pde_system.
  • n_batches_train (int, optional) – Number of batches to train in every epoch, where batch-size equals train_generator.size. Defaults to 1.
  • n_batches_valid (int, optional) – Number of batches to validate in every epoch, where batch-size equals valid_generator.size. Defaults to 4.
  • metrics (dict[string, callable]) – Metrics to keep track of during training. The metrics should be passed as a dictionary where the keys are the names of the metrics, and the values are the corresponding function. The input functions should be the same as pde and the output should be a numeric value. The metrics are evaluated on both the training set and validation set.
  • max_epochs (int, optional) – The maximum number of epochs to train. Defaults to 1000.
  • monitor (neurodiffeq.pde.Monitor2D, optional) – The monitor to check the status of neural network during training. Defaults to None.
  • return_internal (bool, optional) – Whether to return the nets, conditions, training generator, validation generator, optimizer and loss function. Defaults to False.
  • return_best (bool, optional) – Whether to return the nets that achieved the lowest validation loss. Defaults to False.
  • batch_size (int) – [DEPRECATED and IGNORED] Each batch will use all samples generated. Please specify n_batches_train and n_batches_valid instead.
  • shuffle (bool) – [DEPRECATED and IGNORED] Shuffling should be performed by generators.
Returns:

The solution of the PDE. The history of training loss and validation loss. Optionally, the nets, conditions, training generator, validation generator, optimizer and loss function. The solution is a function that has the signature solution(xs, ys, as_type).

Return type:

tuple[neurodiffeq.pde.Solution, dict] or tuple[neurodiffeq.pde.Solution, dict, dict]

Note

This function is deprecated, use a neurodiffeq.solvers.Solver2D instead.

neurodiffeq.pde.solve2D_system(pde_system, conditions, xy_min=None, xy_max=None, single_net=None, nets=None, train_generator=None, valid_generator=None, optimizer=None, criterion=None, n_batches_train=1, n_batches_valid=4, additional_loss_term=None, metrics=None, max_epochs=1000, monitor=None, return_internal=False, return_best=False, batch_size=None, shuffle=None)

Train a neural network to solve a PDE with 2 independent variables.

Parameters:
  • pde_system (callable) – The PDE system to solve. If the PDE is \(F_i(u_1, u_2, ..., u_n, x, y) = 0\) where \(u_i\) is the i-th dependent variable and \(x\) and \(y\) are the independent variables, then pde_system should be a function that maps \((u_1, u_2, ..., u_n, x, y)\) to a list where the i-th entry is \(F_i(u_1, u_2, ..., u_n, x, y)\).
  • conditions (list[neurodiffeq.conditions.BaseCondition]) – The initial/boundary conditions. The ith entry of the conditions is the condition that \(u_i\) should satisfy.
  • xy_min (tuple[float, float], optional) – The lower bound of 2 dimensions. If we only care about \(x \geq x_0\) and \(y \geq y_0\), then xy_min is (x_0, y_0). Only needed when train_generator or valid_generator are not specified. Defaults to None
  • xy_max (tuple[float, float], optional) – The upper bound of 2 dimensions. If we only care about \(x \leq x_1\) and \(y \leq y_1\), then xy_max is (x_1, y_1). Only needed when train_generator or valid_generator are not specified. Defaults to None
  • single_net (torch.nn.Module, optional) – The single neural network used to approximate the solution. Only one of single_net and nets should be specified. Defaults to None.
  • nets (list[torch.nn.Module], optional) – The neural networks used to approximate the solution. Defaults to None.
  • train_generator (neurodiffeq.generators.Generator2D, optional) – The example generator to generate 2-D training points. Defaults to None.
  • valid_generator (neurodiffeq.generators.Generator2D, optional) – The example generator to generate 2-D validation points. Defaults to None.
  • optimizer (torch.optim.Optimizer, optional) – The optimization method to use for training. Defaults to None.
  • criterion (torch.nn.modules.loss._Loss, optional) – The loss function to use for training. Defaults to None.
  • n_batches_train (int, optional) – Number of batches to train in every epoch, where batch-size equals train_generator.size. Defaults to 1.
  • n_batches_valid (int, optional) – Number of batches to validate in every epoch, where batch-size equals valid_generator.size. Defaults to 4.
  • additional_loss_term (callable) – Extra terms to add to the loss function besides the part specified by criterion. The input of additional_loss_term should be the same as pde_system.
  • metrics (dict[string, callable]) – Metrics to keep track of during training. The metrics should be passed as a dictionary where the keys are the names of the metrics, and the values are the corresponding function. The input functions should be the same as pde_system and the output should be a numeric value. The metrics are evaluated on both the training set and validation set.
  • max_epochs (int, optional) – The maximum number of epochs to train. Defaults to 1000.
  • monitor (neurodiffeq.pde.Monitor2D, optional) – The monitor to check the status of the neural network during training. Defaults to None.
  • return_internal (bool, optional) – Whether to return the nets, conditions, training generator, validation generator, optimizer and loss function. Defaults to False.
  • return_best (bool, optional) – Whether to return the nets that achieved the lowest validation loss. Defaults to False.
  • batch_size (int) – [DEPRECATED and IGNORED] Each batch will use all samples generated. Please specify n_batches_train and n_batches_valid instead.
  • shuffle (bool) – [DEPRECATED and IGNORED] Shuffling should be performed by generators.
Returns:

The solution of the PDE. The history of training loss and validation loss. Optionally, the nets, conditions, training generator, validation generator, optimizer and loss function. The solution is a function that has the signature solution(xs, ys, as_type).

Return type:

tuple[neurodiffeq.pde.Solution, dict] or tuple[neurodiffeq.pde.Solution, dict, dict]

Note

This function is deprecated, use a neurodiffeq.solvers.Solver2D instead.

neurodiffeq.pde_spherical

neurodiffeq.pde_spherical.solve_spherical(pde, condition, r_min=None, r_max=None, net=None, train_generator=None, valid_generator=None, analytic_solution=None, optimizer=None, criterion=None, max_epochs=1000, monitor=None, return_internal=False, return_best=False, harmonics_fn=None, batch_size=None, shuffle=None)

[DEPRECATED, use SphericalSolver class instead] Train a neural network to solve one PDE with spherical inputs in 3D space.

Parameters:
  • pde (callable) – The PDE to solve. If the PDE is \(F(u, r,\theta, \phi) = 0\), where \(u\) is the dependent variable and \(r\), \(\theta\) and \(\phi\) are the independent variables, then pde should be a function that maps \((u, r, \theta, \phi)\) to \(F(u, r,\theta, \phi)\).
  • condition (neurodiffeq.conditions.BaseCondition) – The initial/boundary condition that \(u\) should satisfy.
  • r_min (float, optional) – Radius for inner boundary; ignored if both generators are provided.
  • r_max (float, optional) – Radius for outer boundary; ignored if both generators are provided.
  • net (torch.nn.Module, optional) – The neural network used to approximate the solution. Defaults to None.
  • train_generator (neurodiffeq.generators.BaseGenerator, optional) – The example generator to generate 3-D training points. Defaults to None.
  • valid_generator (neurodiffeq.generators.BaseGenerator, optional) – The example generator to generate 3-D validation points. Defaults to None.
  • analytic_solution (callable) – Analytic solution to the pde system, used for testing purposes. It should map (rs, thetas, phis) to u.
  • optimizer (torch.optim.Optimizer, optional) – The optimization method to use for training. Defaults to None.
  • criterion (torch.nn.modules.loss._Loss, optional) – The loss function to use for training. Defaults to None.
  • max_epochs (int, optional) – The maximum number of epochs to train. Defaults to 1000.
  • monitor (neurodiffeq.pde_spherical.MonitorSpherical, optional) – The monitor to check the status of neural network during training. Defaults to None.
  • return_internal (bool, optional) – Whether to return the nets, conditions, training generator, validation generator, optimizer and loss function. Defaults to False.
  • return_best (bool, optional) – Whether to return the nets that achieved the lowest validation loss. Defaults to False.
  • harmonics_fn (callable) – Function basis (spherical harmonics for example) if solving coefficients of a function basis. Used when returning the solution.
  • batch_size (int) – [DEPRECATED and IGNORED] Each batch will use all samples generated. Please specify n_batches_train and n_batches_valid instead.
  • shuffle (bool) – [DEPRECATED and IGNORED] Shuffling should be performed by generators.
Returns:

The solution of the PDE. The history of training loss and validation loss. Optionally, MSE against analytic solution, the nets, conditions, training generator, validation generator, optimizer and loss function. The solution is a function that has the signature solution(rs, thetas, phis, as_type).

Return type:

tuple[neurodiffeq.pde_spherical.SolutionSpherical, dict] or tuple[neurodiffeq.pde_spherical.SolutionSpherical, dict, dict]

Note

This function is deprecated, use a neurodiffeq.solvers.SphericalSolver instead.

neurodiffeq.pde_spherical.solve_spherical_system(pde_system, conditions, r_min=None, r_max=None, nets=None, train_generator=None, valid_generator=None, analytic_solutions=None, optimizer=None, criterion=None, max_epochs=1000, monitor=None, return_internal=False, return_best=False, harmonics_fn=None, batch_size=None, shuffle=None)

[DEPRECATED, use SphericalSolver class instead] Train a neural network to solve a PDE system with spherical inputs in 3D space.

Parameters:
  • pde_system (callable) – The PDE system to solve. If the PDE is \(F_i(u_1, u_2, ..., u_n, r,\theta, \phi) = 0\) where \(u_i\) is the i-th dependent variable and \(r\), \(\theta\) and \(\phi\) are the independent variables, then pde_system should be a function that maps \((u_1, u_2, ..., u_n, r, \theta, \phi)\) to a list where the i-th entry is \(F_i(u_1, u_2, ..., u_n, r, \theta, \phi)\).
  • conditions (list[neurodiffeq.conditions.BaseCondition]) – The initial/boundary conditions. The i-th entry of conditions is the condition that \(u_i\) should satisfy.
  • r_min (float, optional) – Radius for inner boundary. Ignored if both generators are provided.
  • r_max (float, optional) – Radius for outer boundary. Ignored if both generators are provided.
  • nets (list[torch.nn.Module], optional) – The neural networks used to approximate the solution. Defaults to None.
  • train_generator (neurodiffeq.generators.BaseGenerator, optional) – The example generator to generate 3-D training points. Defaults to None.
  • valid_generator (neurodiffeq.generators.BaseGenerator, optional) – The example generator to generate 3-D validation points. Defaults to None.
  • analytic_solutions (callable) – Analytic solution to the pde system, used for testing purposes. It should map (rs, thetas, phis) to a list of [u_1, u_2, …, u_n].
  • optimizer (torch.optim.Optimizer, optional) – The optimization method to use for training. Defaults to None.
  • criterion (torch.nn.modules.loss._Loss, optional) – The loss function to use for training. Defaults to None.
  • max_epochs (int, optional) – The maximum number of epochs to train. Defaults to 1000.
  • monitor (neurodiffeq.pde_spherical.MonitorSpherical, optional) – The monitor to check the status of neural network during training. Defaults to None.
  • return_internal (bool, optional) – Whether to return the nets, conditions, training generator, validation generator, optimizer and loss function. Defaults to False.
  • return_best (bool, optional) – Whether to return the nets that achieved the lowest validation loss. Defaults to False.
  • harmonics_fn (callable) – Function basis (spherical harmonics for example) if solving coefficients of a function basis. Used when returning solution.
  • batch_size (int) – [DEPRECATED and IGNORED] Each batch will use all samples generated. Please specify n_batches_train and n_batches_valid instead.
  • shuffle (bool) – [DEPRECATED and IGNORED] Shuffling should be performed by generators.
Returns:

The solution of the PDE. The history of training loss and validation loss. Optionally, MSE against analytic solutions, the nets, conditions, training generator, validation generator, optimizer and loss function. The solution is a function that has the signature solution(rs, thetas, phis, as_type).

Return type:

tuple[neurodiffeq.pde_spherical.SolutionSpherical, dict] or tuple[neurodiffeq.pde_spherical.SolutionSpherical, dict, dict]

Note

This function is deprecated, use a neurodiffeq.solvers.SphericalSolver instead.

neurodiffeq.temporal

class neurodiffeq.temporal.Approximator

Bases: abc.ABC

The base class of approximators. An approximator is an approximation of the differential equation’s solution. It knows the parameters in the neural network, and how to calculate the loss function and the metrics.

class neurodiffeq.temporal.BoundaryCondition(form, points_generator)

Bases: object

A boundary condition. It is used to initialize temporal.Approximators.

Parameters:
  • form (callable) –

    The form of the boundary condition.

    • For a 1D time-dependent problem, if the boundary condition demands that \(B(u, x) = 0\), then form should be a function that maps \(u, x, t\) to \(B(u, x)\).
    • For a 2D steady-state problem, if the boundary condition demands that \(B(u, x, y) = 0\), then form should be a function that maps \(u, x, y\) to \(B(u, x, y)\).
    • For a 2D steady-state system, if the boundary condition demands that \(B(u_i, x, y) = 0\), then form should be a function that maps \(u_1, u_2, ..., u_n, x, y\) to \(B(u_i, x, y)\).
    • For a 2D time-dependent problem, if the boundary condition demands that \(B(u, x, y) = 0\), then form should be a function that maps \(u, x, y, t\) to \(B(u, x, y)\).

    Basically the function signature of form should be the same as the pde function of the given temporal.Approximator.

  • points_generator – A generator that generates points on the boundary. It can be a temporal.generator_1dspatial, temporal.generator_2dspatial_segment, or a generator written by user.
class neurodiffeq.temporal.FirstOrderInitialCondition(u0)

Bases: object

A first order initial condition. It is used to initialize temporal.Approximators.

Parameters:u0 (callable) –

A function representing the initial condition. If we are solving for \(u\), then u0 is \(u\bigg|_{t=0}\). The input of the function depends on where it is used.

  • If it is used as the input for temporal.SingleNetworkApproximator1DSpatialTemporal, then u0 should map \(x\) to \(u(x, t)\bigg|_{t = 0}\).
  • If it is used as the input for temporal.SingleNetworkApproximator2DSpatialTemporal, then u0 should map \((x, y)\) to \(u(x, y, t)\bigg|_{t = 0}\).
class neurodiffeq.temporal.Monitor1DSpatialTemporal(check_on_x, check_on_t, check_every)

Bases: object

A monitor for 1D time-dependent problems.

class neurodiffeq.temporal.Monitor2DSpatial(check_on_x, check_on_y, check_every)

Bases: object

A Monitor for 2D steady-state problems

class neurodiffeq.temporal.Monitor2DSpatialTemporal(check_on_x, check_on_y, check_on_t, check_every)

Bases: object

A monitor for 2D time-dependent problems.

class neurodiffeq.temporal.MonitorMinimal(check_every)

Bases: object

A monitor that shows the loss function and custom metrics.

class neurodiffeq.temporal.SecondOrderInitialCondition(u0, u0dot)

Bases: object

A second order initial condition. It is used to initialize temporal.Approximators.

Parameters:
  • u0 (callable) –

    A function representing the initial condition. If we are solving for \(u\), then u0 is \(u\bigg|_{t=0}\). The input of the function depends on where it is used.

    • If it is used as the input for temporal.SingleNetworkApproximator1DSpatialTemporal, then u0 should map \(x\) to \(u(x, t)\bigg|_{t = 0}\).
    • If it is used as the input for temporal.SingleNetworkApproximator2DSpatialTemporal, then u0 should map \((x, y)\) to \(u(x, y, t)\bigg|_{t = 0}\).
  • u0dot (callable) –

    A function representing the initial derivative w.r.t. time. If we are solving for \(u\), then u0dot is \(\dfrac{\partial u}{\partial t}\bigg|_{t=0}\). The input of the function depends on where it is used.

    • If it is used as the input for temporal.SingleNetworkApproximator1DSpatialTemporal, then u0dot should map \(x\) to \(\dfrac{\partial u}{\partial t}\bigg|_{t = 0}\).
    • If it is used as the input for temporal.SingleNetworkApproximator2DSpatialTemporal, then u0dot should map \((x, y)\) to \(\dfrac{\partial u}{\partial t}\bigg|_{t = 0}\).
class neurodiffeq.temporal.SingleNetworkApproximator1DSpatialTemporal(single_network, pde, initial_condition, boundary_conditions, boundary_strictness=1.0)

Bases: neurodiffeq.temporal.Approximator

An approximator to approximate the solution of a 1D time-dependent problem. The boundary condition will be enforced by a regularization term in the loss function and the initial condition will be enforced by transforming the output of the neural network.

Parameters:
  • single_network (torch.nn.Module) – A neural network with 2 input nodes (x, t) and 1 output node.
  • pde (function) – The PDE to solve. If the PDE is \(F(u, x, t) = 0\) then pde should be a function that maps \((u, x, t)\) to \(F(u, x, t)\).
  • initial_condition (temporal.FirstOrderInitialCondition) – A first order initial condition.
  • boundary_conditions (list[temporal.BoundaryCondition]) – A list of boundary conditions.
  • boundary_strictness (float) – The regularization parameter, defaults to 1. A larger regularization parameter enforces the boundary conditions more strictly.
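
The role of boundary_strictness can be sketched as a weighted sum of loss terms. The snippet below is a hypothetical, simplified scalar version; the real approximator computes these terms from batched torch tensors.

```python
# Sketch of how boundary_strictness weights the boundary regularization term
# in the total loss (hypothetical scalar version; the real approximator
# computes these terms from batched torch tensors).
def total_loss(pde_residuals, bc_violations, boundary_strictness=1.0):
    pde_term = sum(r ** 2 for r in pde_residuals) / len(pde_residuals)
    bc_term = sum(b ** 2 for b in bc_violations) / len(bc_violations)
    return pde_term + boundary_strictness * bc_term

# A larger boundary_strictness penalizes boundary violations more heavily.
loose = total_loss([0.1, -0.2], [0.5, 0.3], boundary_strictness=1.0)
strict = total_loss([0.1, -0.2], [0.5, 0.3], boundary_strictness=10.0)
```
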
class neurodiffeq.temporal.SingleNetworkApproximator2DSpatial(single_network, pde, boundary_conditions, boundary_strictness=1.0)

Bases: neurodiffeq.temporal.Approximator

An approximator to approximate the solution of a 2D steady-state problem. The boundary condition will be enforced by a regularization term in the loss function.

Parameters:
  • single_network (torch.nn.Module) – A neural network with 2 input nodes (x, y) and 1 output node.
  • pde (function) – The PDE to solve. If the PDE is \(F(u, x, y) = 0\) then pde should be a function that maps \((u, x, y)\) to \(F(u, x, y)\).
  • boundary_conditions (list[temporal.BoundaryCondition]) – A list of boundary conditions.
  • boundary_strictness (float) – The regularization parameter, defaults to 1. A larger regularization parameter enforces the boundary conditions more strictly.
class neurodiffeq.temporal.SingleNetworkApproximator2DSpatialSystem(single_network, pde, boundary_conditions, boundary_strictness=1.0)

Bases: neurodiffeq.temporal.Approximator

An approximator to approximate the solution of a 2D steady-state differential equation system. The boundary condition will be enforced by a regularization term in the loss function.

Parameters:
  • single_network (torch.nn.Module) – A neural network with 2 input nodes (x, y) and n output nodes, where n is the number of dependent variables in the differential equation system.
  • pde (callable) – The PDE system to solve. If the PDE is \(F_i(u_1, u_2, ..., u_n, x, y) = 0\) where \(u_i\) is the i-th dependent variable, then pde should be a function that maps \((u_1, u_2, ..., u_n, x, y)\) to a list where the i-th entry is \(F_i(u_1, u_2, ..., u_n, x, y)\).
  • boundary_conditions (list[temporal.BoundaryCondition]) – A list of boundary conditions.
  • boundary_strictness (float) – The regularization parameter, defaults to 1. A larger regularization parameter enforces the boundary conditions more strictly.
class neurodiffeq.temporal.SingleNetworkApproximator2DSpatialTemporal(single_network, pde, initial_condition, boundary_conditions, boundary_strictness=1.0)

Bases: neurodiffeq.temporal.Approximator

An approximator to approximate the solution of a 2D time-dependent problem. The boundary condition will be enforced by a regularization term in the loss function and the initial condition will be enforced by transforming the output of the neural network.

Parameters:
  • single_network (torch.nn.Module) – A neural network with 3 input nodes (x, y, t) and 1 output node.
  • pde (callable) – The PDE system to solve. If the PDE is \(F(u, x, y, t) = 0\) then pde should be a function that maps \((u, x, y, t)\) to \(F(u, x, y, t)\).
  • initial_condition (temporal.FirstOrderInitialCondition or temporal.SecondOrderInitialCondition) – A first or second order initial condition.
  • boundary_conditions (list[temporal.BoundaryCondition]) – A list of boundary conditions.
  • boundary_strictness (float) – The regularization parameter, defaults to 1. A larger regularization parameter enforces the boundary conditions more strictly.
neurodiffeq.temporal.generator_1dspatial(size, x_min, x_max, random=True)

Return a generator that generates 1D points ranging from x_min to x_max.

Parameters:
  • size (int) – Number of points to generate when __next__ is invoked.
  • x_min (float) – Lower bound of x.
  • x_max (float) – Upper bound of x.
  • random (bool) –
    • If set to False, then return equally spaced points ranging from x_min to x_max.
    • If set to True, then generate points randomly.

    Defaults to True.
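
The two sampling modes can be sketched in plain Python. This is a hypothetical stand-in (function name included); the real generator yields torch tensors.

```python
import random

# Sketch of the two sampling modes of a 1-D spatial generator (hypothetical
# pure-Python stand-in; the real generator yields torch tensors).
def generate_1d(size, x_min, x_max, random_points=True):
    while True:
        if random_points:
            yield [random.uniform(x_min, x_max) for _ in range(size)]
        else:
            step = (x_max - x_min) / (size - 1)
            yield [x_min + i * step for i in range(size)]

equally_spaced = next(generate_1d(5, 0.0, 1.0, random_points=False))
# equally_spaced == [0.0, 0.25, 0.5, 0.75, 1.0]
```
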

neurodiffeq.temporal.generator_2dspatial_rectangle(size, x_min, x_max, y_min, y_max, random=True)

Return a generator that generates 2D points in a rectangle.

Parameters:
  • size (int) – Number of points to generate when __next__ is invoked.
  • x_min (float) – Lower bound of x.
  • x_max (float) – Upper bound of x.
  • y_min (float) – Lower bound of y.
  • y_max (float) – Upper bound of y.
  • random (bool) –
    • If set to False, then return a grid where the points are equally spaced in the x and y dimensions.
    • If set to True, then generate points randomly.

    Defaults to True.

neurodiffeq.temporal.generator_2dspatial_segment(size, start, end, random=True)

Return a generator that generates 2D points in a line segment.

Parameters:
  • size (int) – Number of points to generate when __next__ is invoked.
  • start (tuple[float, float]) – The starting point of the line segment.
  • end (tuple[float, float]) – The ending point of the line segment.
  • random (bool) –
    • If set to False, then return equally spaced points along the segment from start to end.
    • If set to True, then generate points randomly.

    Defaults to True.
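
Sampling on a segment can be sketched as linear interpolation between start and end. The helper below is hypothetical; the real generator returns torch tensors.

```python
import random

# Sketch of sampling on a 2-D line segment by linear interpolation between
# `start` and `end` (hypothetical helper; the real generator returns tensors).
def segment_points(size, start, end, random_points=True):
    def lerp(t):
        return (start[0] + t * (end[0] - start[0]),
                start[1] + t * (end[1] - start[1]))
    if random_points:
        return [lerp(random.random()) for _ in range(size)]
    return [lerp(i / (size - 1)) for i in range(size)]

pts = segment_points(3, (0.0, 0.0), (2.0, 4.0), random_points=False)
# pts == [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]
```
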

neurodiffeq.temporal.generator_temporal(size, t_min, t_max, random=True)

Return a generator that generates 1D points ranging from t_min to t_max.

Parameters:
  • size (int) – Number of points to generate when __next__ is invoked.
  • t_min (float) – Lower bound of t.
  • t_max (float) – Upper bound of t.
  • random (bool) –
    • If set to False, then return equally spaced points ranging from t_min to t_max.
    • If set to True, then generate points randomly.

    Defaults to True.

neurodiffeq.function_basis

class neurodiffeq.function_basis.BasisOperator

Bases: abc.ABC

class neurodiffeq.function_basis.CustomBasis(fns)

Bases: neurodiffeq.function_basis.FunctionBasis

class neurodiffeq.function_basis.FourierLaplacian(max_degree=12)

Bases: neurodiffeq.function_basis.BasisOperator

A Laplacian operator (in polar coordinates) acting on \(\displaystyle\sum_{i} R_i(r)F(\phi)\) where \(F\) is a Fourier component

Parameters:max_degree (int) – Highest degree for the Fourier series.
class neurodiffeq.function_basis.FunctionBasis

Bases: abc.ABC

class neurodiffeq.function_basis.HarmonicsLaplacian(max_degree=4)

Bases: neurodiffeq.function_basis.BasisOperator

The Laplacian of spherical harmonics can be reduced in the following way. Using this method, we can avoid the \(\displaystyle \frac{1}{\sin \theta}\) singularity:

\(\begin{aligned} &\nabla^{2} R_{l, m}(r) Y_{l,m}(\theta, \phi)\\ &=\left(\nabla_{r}^{2}+\nabla_{\theta}^{2}+\nabla_{\phi}^{2}\right)\left(R_{l, m}(r) Y_{l, m}(\theta, \phi)\right)\\ &=Y_{l, m} \nabla_{r}^{2} R_{l, m}+R_{l, m}\left(\left(\nabla_{\theta}^{2}+\nabla_{\phi}^{2}\right)Y_{l, m}\right)\\ &=Y_{l, m} \nabla_{r}^{2} R_{l, m}+R_{l, m} \frac{-l(l+1)}{r^{2}} Y_{l, m}\\ &=Y_{l, m}\left(\nabla_{r}^{2} R_{l, m}+\frac{-l(l+1)}{r^{2}} R_{l, m}\right) \end{aligned}\)

class neurodiffeq.function_basis.LegendreBasis(max_degree)

Bases: neurodiffeq.function_basis.FunctionBasis

class neurodiffeq.function_basis.RealFourierSeries(max_degree=12)

Bases: neurodiffeq.function_basis.FunctionBasis

Real Fourier Series.

Parameters:max_degree (int) – Highest degree for the Fourier series.
class neurodiffeq.function_basis.RealSphericalHarmonics(max_degree=4)

Bases: neurodiffeq.function_basis.FunctionBasis

Spherical harmonics as a function basis

Parameters:max_degree (int) – Highest degree for the spherical harmonics (currently only supports l <= 4).
neurodiffeq.function_basis.ZeroOrderSphericalHarmonics(max_degree=None, degrees=None)

Zonal harmonics (spherical harmonics with order=0)

Parameters:
  • max_degree (int) – Highest degree to be included; degrees will contain {0, 1, …, max_degree}. Ignored if degrees is passed.
  • degrees (list[int]) – A list of degrees to be used; must be nonnegative and unique. If passed, max_degree will be ignored.
neurodiffeq.function_basis.ZeroOrderSphericalHarmonicsLaplacian(max_degree=None, degrees=None)

Laplacian operator acting on coefficients of zonal harmonics (spherical harmonics with order=0)

Parameters:
  • max_degree (int) – Highest degree to be included; degrees will contain {0, 1, …, max_degree}. Ignored if degrees is passed.
  • degrees (list[int]) – A list of degrees to be used; must be nonnegative and unique. If passed, max_degree will be ignored.
class neurodiffeq.function_basis.ZonalSphericalHarmonics(max_degree=None, degrees=None)

Bases: neurodiffeq.function_basis.FunctionBasis

Zonal harmonics (spherical harmonics with order=0)

Parameters:
  • max_degree (int) – Highest degree to be included; degrees will contain {0, 1, …, max_degree}. Ignored if degrees is passed.
  • degrees (list[int]) – A list of degrees to be used; must be nonnegative and unique. If passed, max_degree will be ignored.
class neurodiffeq.function_basis.ZonalSphericalHarmonicsLaplacian(max_degree=None, degrees=None)

Bases: neurodiffeq.function_basis.BasisOperator

Laplacian operator acting on coefficients of zonal harmonics (spherical harmonics with order=0)

Parameters:
  • max_degree (int) – Highest degree to be included; degrees will contain {0, 1, …, max_degree}. Ignored if degrees is passed.
  • degrees (list[int]) – A list of degrees to be used; must be nonnegative and unique. If passed, max_degree will be ignored.

neurodiffeq.generators

This module contains atomic generator classes and useful tools to construct complex generators out of atomic ones.

class neurodiffeq.generators.BaseGenerator

Bases: object

Base class for all generators; Children classes must implement a .get_examples method and a .size field.

class neurodiffeq.generators.BatchGenerator(generator, batch_size)

Bases: neurodiffeq.generators.BaseGenerator

A generator which caches samples and returns a single batch of the samples at a time.

Parameters:
  • generator (BaseGenerator) – A generator used for getting (cached) examples.
  • batch_size (int) – Number of samples to return per batch. It can be larger than the size of the wrapped generator, but doing so is inefficient.
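
The caching idea can be sketched in plain Python: keep drawing from the wrapped generator until the cache can fill one batch. The class below is a hypothetical stand-in; the real BatchGenerator operates on torch tensors.

```python
# Sketch of the caching idea behind BatchGenerator (hypothetical pure-Python
# stand-in; the real class wraps a BaseGenerator and works on torch tensors).
class SimpleBatcher:
    def __init__(self, get_samples, batch_size):
        self.get_samples = get_samples  # callable returning a list of samples
        self.batch_size = batch_size
        self.cache = []

    def get_examples(self):
        # Refill the cache until one full batch is available.
        while len(self.cache) < self.batch_size:
            self.cache.extend(self.get_samples())
        batch = self.cache[:self.batch_size]
        self.cache = self.cache[self.batch_size:]
        return batch

batcher = SimpleBatcher(lambda: [1, 2, 3, 4, 5], batch_size=2)
first = batcher.get_examples()   # [1, 2]
second = batcher.get_examples()  # [3, 4]
```
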
class neurodiffeq.generators.ConcatGenerator(*generators)

Bases: neurodiffeq.generators.BaseGenerator

A concatenated generator for sampling points, whose get_examples() method returns the concatenated vector of the samples returned by its sub-generators.

Parameters:generators (Tuple[BaseGenerator]) – A sequence of sub-generators; each must have a .size field and a .get_examples() method.

Note

Not to be confused with EnsembleGenerator which returns all the samples of its sub-generators.

class neurodiffeq.generators.EnsembleGenerator(*generators)

Bases: neurodiffeq.generators.BaseGenerator

A generator for sampling points whose get_examples method returns all the samples of its sub-generators. All sub-generators must return tensors of the same shape. The number of tensors returned by each sub-generator can be different.

Parameters:generators (Tuple[BaseGenerator]) – A sequence of sub-generators; each must have a .size field and a .get_examples() method.

Note

Not to be confused with ConcatGenerator which returns the concatenated vector of samples returned by its sub-generators.
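
The contrast between the two classes can be sketched with plain lists standing in for torch tensors (the helper names below are hypothetical).

```python
# Sketch contrasting ConcatGenerator and EnsembleGenerator, with plain lists
# standing in for torch tensors (helper names are hypothetical).

def concat_examples(*sample_lists):
    # ConcatGenerator: one long vector; sub-generator sizes may differ.
    out = []
    for samples in sample_lists:
        out.extend(samples)
    return out

def ensemble_examples(*sample_lists):
    # EnsembleGenerator: one entry per sub-generator; shapes must match.
    assert len({len(s) for s in sample_lists}) == 1, "shape mismatch"
    return tuple(sample_lists)

concatenated = concat_examples([0.1, 0.2, 0.3], [0.9, 1.0])  # one list of 5
ensembled = ensemble_examples([0.1, 0.2], [0.9, 1.0])        # 2 lists of 2
```
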

class neurodiffeq.generators.FilterGenerator(generator, filter_fn, size=None, update_size=True)

Bases: neurodiffeq.generators.BaseGenerator

A generator which applies some filtering before samples are returned

Parameters:
  • generator (BaseGenerator) – A generator used to generate samples to be filtered.
  • filter_fn (callable) – A filter to be applied on the sample vectors; maps a list of tensors to a mask tensor.
  • size (int) – Size to be used for self.size. If not given, this attribute is initialized to the size of generator.
  • update_size (bool) – Whether or not to update .size after each call of self.get_examples. Defaults to True.
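
The filtering idea can be sketched as: repeatedly draw candidate samples and keep only those passing the filter. The helper below is hypothetical; the real class applies a mask tensor to batched torch tensors.

```python
# Sketch of the idea behind FilterGenerator: repeatedly draw candidates and
# keep only those passing the filter (hypothetical helper; the real class
# applies a mask tensor to batched torch tensors).
def filtered_examples(draw, filter_fn, size):
    kept = []
    while len(kept) < size:
        kept.extend(x for x in draw() if filter_fn(x))
    return kept[:size]

# e.g. keep only points in the upper half of the unit interval
upper = filtered_examples(lambda: [0.1, 0.6, 0.7, 0.2, 0.9],
                          lambda x: x > 0.5, size=3)
# upper == [0.6, 0.7, 0.9]
```
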
class neurodiffeq.generators.Generator1D(size, t_min=0.0, t_max=1.0, method='uniform', noise_std=None)

Bases: neurodiffeq.generators.BaseGenerator

An example generator for generating 1-D training points.

Parameters:
  • size (int) – The number of points to generate each time get_examples is called.
  • t_min (float, optional) – The lower bound of the 1-D points generated, defaults to 0.0.
  • t_max (float, optional) – The upper bound of the 1-D points generated, defaults to 1.0.
  • method (str, optional) –

    The distribution of the 1-D points generated.

    • If set to ‘uniform’, the points will be drawn from a uniform distribution Unif(t_min, t_max).
    • If set to ‘equally-spaced’, the points will be fixed to a set of linearly-spaced points that go from t_min to t_max.
    • If set to ‘equally-spaced-noisy’, a normal noise will be added to the previously mentioned set of points.
    • If set to ‘log-spaced’, the points will be fixed to a set of log-spaced points that go from t_min to t_max.
    • If set to ‘log-spaced-noisy’, a normal noise will be added to the previously mentioned set of points.
    • If set to ‘chebyshev1’ or ‘chebyshev’, the points are Chebyshev nodes of the first kind over (t_min, t_max).
    • If set to ‘chebyshev2’, the points will be Chebyshev nodes of the second kind over [t_min, t_max].

    defaults to ‘uniform’.

Raises:

ValueError – When provided with an unknown method.
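
The Chebyshev options can be illustrated with the standard first-kind node formula, affinely mapped from [-1, 1] to the target interval. This is a sketch of the standard construction; the exact convention inside Generator1D may differ slightly.

```python
import math

# Sketch: Chebyshev nodes of the first kind on [-1, 1], affinely mapped to
# (t_min, t_max). Standard formula; Generator1D's convention may differ.
def chebyshev1_nodes(n, t_min, t_max):
    raw = [math.cos((2 * k + 1) * math.pi / (2 * n)) for k in range(n)]
    mid, half = (t_min + t_max) / 2, (t_max - t_min) / 2
    return sorted(mid + half * x for x in raw)

nodes = chebyshev1_nodes(5, 0.0, 1.0)
# Nodes cluster toward the endpoints, which helps control interpolation error.
```
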

class neurodiffeq.generators.Generator2D(grid=(10, 10), xy_min=(0.0, 0.0), xy_max=(1.0, 1.0), method='equally-spaced-noisy', xy_noise_std=None)

Bases: neurodiffeq.generators.BaseGenerator

An example generator for generating 2-D training points.

Parameters:
  • grid (tuple[int, int], optional) – The discretization of the 2 dimensions. If we want to generate points on a \(m \times n\) grid, then grid is (m, n). Defaults to (10, 10).
  • xy_min (tuple[float, float], optional) – The lower bound of 2 dimensions. If we only care about \(x \geq x_0\) and \(y \geq y_0\), then xy_min is (x_0, y_0). Defaults to (0.0, 0.0).
  • xy_max (tuple[float, float], optional) – The upper bound of 2 dimensions. If we only care about \(x \leq x_1\) and \(y \leq y_1\), then xy_max is (x_1, y_1). Defaults to (1.0, 1.0).
  • method (str, optional) –

    The distribution of the 2-D points generated.

    • If set to ‘equally-spaced’, the points will be fixed to the grid specified.
    • If set to ‘equally-spaced-noisy’, a normal noise will be added to the previously mentioned set of points.
    • If set to ‘chebyshev’ or ‘chebyshev1’, the points will be 2-D Chebyshev points of the first kind.
    • If set to ‘chebyshev2’, the points will be 2-D Chebyshev points of the second kind.

    Defaults to ‘equally-spaced-noisy’.

  • xy_noise_std (tuple[float, float], optional, defaults to None) – The standard deviation of the noise on the x and y dimensions. If not specified, the default value will be (grid step size on x dimension / 4, grid step size on y dimension / 4).
Raises:

ValueError – When provided with an unknown method.

class neurodiffeq.generators.Generator3D(grid=(10, 10, 10), xyz_min=(0.0, 0.0, 0.0), xyz_max=(1.0, 1.0, 1.0), method='equally-spaced-noisy')

Bases: neurodiffeq.generators.BaseGenerator

An example generator for generating 3-D training points. NOT TO BE CONFUSED with GeneratorSpherical

Parameters:
  • grid (tuple[int, int, int], optional) – The discretization of the 3 dimensions. If we want to generate points on a \(m \times n \times k\) grid, then grid is (m, n, k), defaults to (10, 10, 10).
  • xyz_min (tuple[float, float, float], optional) – The lower bound of 3 dimensions. If we only care about \(x \geq x_0\), \(y \geq y_0\), and \(z \geq z_0\) then xyz_min is \((x_0, y_0, z_0)\). Defaults to (0.0, 0.0, 0.0).
  • xyz_max (tuple[float, float, float], optional) – The upper bound of 3 dimensions. If we only care about \(x \leq x_1\), \(y \leq y_1\), and \(z \leq z_1\) then xyz_max is \((x_1, y_1, z_1)\). Defaults to (1.0, 1.0, 1.0).
  • method (str, optional) –

    The distribution of the 3-D points generated.

    • If set to ‘equally-spaced’, the points will be fixed to the grid specified.
    • If set to ‘equally-spaced-noisy’, a normal noise will be added to the previously mentioned set of points.
    • If set to ‘chebyshev’ or ‘chebyshev1’, the points will be 3-D Chebyshev points of the first kind.
    • If set to ‘chebyshev2’, the points will be 3-D Chebyshev points of the second kind.

    Defaults to ‘equally-spaced-noisy’.

Raises:

ValueError – When provided with an unknown method.

class neurodiffeq.generators.GeneratorND(grid=(10, 10), r_min=(0.0, 0.0), r_max=(1.0, 1.0), methods=['equally-spaced', 'equally-spaced'], noisy=True, r_noise_std=None, **kwargs)

Bases: neurodiffeq.generators.BaseGenerator

An example generator for generating N-D training points.

Parameters:
  • grid (tuple[int, int, .. , int], or it can be int if N=1, optional) – The discretization of the N dimensions. If we want to generate points on a \(n_1 \times n_2 \times ... \times n_N\) grid, then grid is (n_1, n_2, … , n_N). Defaults to (10, 10).
  • r_min (tuple[float, .. , float], or it can be float if N=1, optional) – The lower bound of N dimensions. If we only care about \(r_1 \geq r_1^{min}\), \(r_2 \geq r_2^{min}\), … , and \(r_N \geq r_N^{min}\) then r_min is (r_1_min, r_2_min, … , r_N_min). Defaults to (0.0, 0.0).
  • r_max (tuple[float, .. , float], or it can be float if N=1, optional) – The upper bound of N dimensions. If we only care about \(r_1 \leq r_1^{max}\), \(r_2 \leq r_2^{max}\), … , and \(r_N \leq r_N^{max}\) then r_max is (r_1_max, r_2_max, … , r_N_max). Defaults to (1.0, 1.0).
  • methods (list[str, str, .. , str], or it can be str if N=1, optional) –

    A list of the distributions used for each of the 1-D coordinates that make up the N-D points.

    • If set to ‘uniform’, the points will be drawn from a uniform distribution Unif(r_min[i], r_max[i]).
    • If set to ‘equally-spaced’, the points will be fixed to a set of linearly-spaced points that go from r_min[i] to r_max[i].
    • If set to ‘log-spaced’, the points will be fixed to a set of log-spaced points that go from r_min[i] to r_max[i].
    • If set to ‘exp-spaced’, the points will be fixed to a set of exp-spaced points that go from r_min[i] to r_max[i].
    • If set to ‘chebyshev’ or ‘chebyshev1’, the points will be Chebyshev points of the first kind that go from r_min[i] to r_max[i].
    • If set to ‘chebyshev2’, the points will be Chebyshev points of the second kind that go from r_min[i] to r_max[i].

    Defaults to [‘equally-spaced’, ‘equally-spaced’].

  • noisy (bool) – If set to True, normal noise will be added to all of the N sets of points that make up the generator. Defaults to True.
Raises:

ValueError – When provided with unknown methods.

class neurodiffeq.generators.GeneratorSpherical(size, r_min=0.0, r_max=1.0, method='equally-spaced-noisy')

Bases: neurodiffeq.generators.BaseGenerator

A generator for generating points in spherical coordinates.

Parameters:
  • size (int) – Number of points in 3-D sphere.
  • r_min (float, optional) – Radius of the interior boundary.
  • r_max (float, optional) – Radius of the exterior boundary.
  • method (str, optional) –

    The distribution of the 3-D points generated.

    • If set to ‘equally-radius-noisy’, radius of the points will be drawn from a uniform distribution \(r \sim U[r_{min}, r_{max}]\).
    • If set to ‘equally-spaced-noisy’, the squared radius of the points will be drawn from a uniform distribution \(r^2 \sim U[r_{min}^2, r_{max}^2]\).

    Defaults to ‘equally-spaced-noisy’.

Note

Not to be confused with Generator3D.
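
The two radial sampling schemes described above can be illustrated with a minimal plain-Python sketch (a hypothetical reimplementation of the radial part only, not the library's code):

```python
import math
import random

def sample_radius(r_min, r_max, method):
    """Draw one radius according to the two documented schemes."""
    if method == "equally-radius-noisy":
        # r itself is uniform on [r_min, r_max]
        return random.uniform(r_min, r_max)
    if method == "equally-spaced-noisy":
        # r^2 is uniform on [r_min^2, r_max^2]
        return math.sqrt(random.uniform(r_min ** 2, r_max ** 2))
    raise ValueError(f"unknown method: {method}")

radii = [sample_radius(0.5, 1.0, "equally-spaced-noisy") for _ in range(1000)]
```

Either way, every sampled radius stays within the interior and exterior boundaries.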

class neurodiffeq.generators.MeshGenerator(*generators)

Bases: neurodiffeq.generators.BaseGenerator

A generator for sampling points whose get_examples method returns a mesh of the samples of its sub-generators. All sub-generators must return tensors of the same shape, or a tuple of tensors of the same shape. The number of tensors returned by each sub-generator may differ, but the intent behind this class is to build an N-dimensional generator from several 1-dimensional generators, so each input generator should represent one dimension of your problem. An exception is made when a MeshGenerator is used as an input to another MeshGenerator: in that case, the original meshed generators are extracted from the input MeshGenerator, and the final mesh is created from those (e.g., MeshGenerator(MeshGenerator(g1, g2), g3) is equivalent to MeshGenerator(g1, g2, g3), where g1, g2, and g3 are Generator1D). This is done to make the use of the ^ infix operator consistent with the use of the MeshGenerator class itself (e.g., MeshGenerator(g1, g2, g3) is equivalent to g1 ^ g2 ^ g3, where g1, g2, and g3 are Generator1D).

Parameters:generators (Tuple[BaseGenerator]) – a sequence of sub-generators, must have a .size field and a .get_examples() method
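
The meshing behaviour is analogous to a cartesian product of the sub-generators' 1-D samples. A plain-Python sketch of the idea, using lists in place of tensors (hypothetical, not the library's implementation):

```python
from itertools import product

def mesh(*sample_sets):
    """Cartesian-product mesh of several 1-D sample sets.

    Returns one coordinate tuple per mesh point, so two sets of sizes
    m and n yield m * n points -- the same growth you get when meshing
    1-D generators into an N-D generator.
    """
    return list(product(*sample_sets))

points = mesh([0.0, 0.5, 1.0], [0.0, 1.0])  # 3 x 2 = 6 mesh points
```
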
class neurodiffeq.generators.PredefinedGenerator(*xs)

Bases: neurodiffeq.generators.BaseGenerator

A generator for generating points that are fixed and predefined.

Parameters:xs (Tuple[torch.Tensor]) – training points that will be returned
get_examples()

Returns the training points. Points are fixed and predefined.

Returns:The predefined training points
Return type:tuple[torch.Tensor]
class neurodiffeq.generators.ResampleGenerator(generator, size=None, replacement=False)

Bases: neurodiffeq.generators.BaseGenerator

A generator whose output is shuffled and resampled on every call.

Parameters:
  • generator (BaseGenerator) – A generator used to generate samples to be shuffled and resampled.
  • size (int) – Size of the shuffled output. Defaults to the size of generator.
  • replacement (bool) – Whether to sample with replacement or not. Defaults to False.
class neurodiffeq.generators.SamplerGenerator(generator)

Bases: neurodiffeq.generators.BaseGenerator

class neurodiffeq.generators.StaticGenerator(generator)

Bases: neurodiffeq.generators.BaseGenerator

A generator that returns the same samples every time. The static samples are obtained by the sub-generator at instantiation time.

Parameters:generator (BaseGenerator) – a generator used to generate the static samples
class neurodiffeq.generators.TransformGenerator(generator, transforms=None, transform=None)

Bases: neurodiffeq.generators.BaseGenerator

A generator which applies certain transformations on the sample vectors.

Parameters:
  • generator (BaseGenerator) – A generator used to generate samples on which transformations will be applied.
  • transforms (list[callable]) – A list of transformations to be applied to the sample vectors. The identity transformation can be specified as None.
  • transform (callable) – A callable that transforms the output(s) of base generator to another (tuple of) coordinate(s).

neurodiffeq.operators

neurodiffeq.operators.cartesian_to_cylindrical(x, y, z)

Convert cartesian coordinates \((x, y, z)\) to cylindrical coordinates \((\rho, \phi, z)\). The input shapes of x, y, and z must be the same. If the azimuthal angle \(\phi\) is undefined, the default value will be 0.

Parameters:
  • x (torch.Tensor) – The \(x\)-component of cartesian coordinates.
  • y (torch.Tensor) – The \(y\)-component of cartesian coordinates.
  • z (torch.Tensor) – The \(z\)-component of cartesian coordinates.
Returns:

The \(\rho\)-, \(\phi\)-, and \(z\)-component in cylindrical coordinates.

Return type:

tuple[torch.Tensor]

neurodiffeq.operators.cartesian_to_spherical(x, y, z)

Convert cartesian coordinates \((x, y, z)\) to spherical coordinates \((r, \theta, \phi)\). The input shapes of x, y, and z must be the same. If either the polar angle \(\theta\) or the azimuthal angle \(\phi\) is not defined, the default value will be 0.

Parameters:
  • x (torch.Tensor) – The \(x\)-component of cartesian coordinates.
  • y (torch.Tensor) – The \(y\)-component of cartesian coordinates.
  • z (torch.Tensor) – The \(z\)-component of cartesian coordinates.
Returns:

The \(r\)-, \(\theta\)-, and \(\phi\)-component in spherical coordinates.

Return type:

tuple[torch.Tensor]
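
The conversion follows the standard physics convention (polar angle \(\theta\) measured from the positive \(z\)-axis, azimuthal angle \(\phi\) in the \(x\)-\(y\) plane). A plain-Python sketch of the scalar math (hypothetical, not the library's tensor implementation):

```python
import math

def cart_to_sph(x, y, z):
    """(x, y, z) -> (r, theta, phi) in the physics convention."""
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r) if r > 0 else 0.0  # polar angle in [0, pi]
    phi = math.atan2(y, x)                      # azimuthal angle in (-pi, pi]
    return r, theta, phi

r1, t1, p1 = cart_to_sph(0.0, 0.0, 2.0)  # on the +z axis: theta = phi = 0
r2, t2, p2 = cart_to_sph(1.0, 1.0, 0.0)  # in the x-y plane: theta = pi/2
```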

neurodiffeq.operators.curl(u_x, u_y, u_z, x, y, z)

Derives and evaluates the curl of a vector field \(\mathbf{u}\) in three dimensional cartesian coordinates.

Parameters:
  • u_x (torch.Tensor) – The \(x\)-component of the vector field \(u\), must have shape (n_samples, 1).
  • u_y (torch.Tensor) – The \(y\)-component of the vector field \(u\), must have shape (n_samples, 1).
  • u_z (torch.Tensor) – The \(z\)-component of the vector field \(u\), must have shape (n_samples, 1).
  • x (torch.Tensor) – A vector of \(x\)-coordinate values, must have shape (n_samples, 1).
  • y (torch.Tensor) – A vector of \(y\)-coordinate values, must have shape (n_samples, 1).
  • z (torch.Tensor) – A vector of \(z\)-coordinate values, must have shape (n_samples, 1).
Returns:

The \(x\), \(y\), and \(z\) components of the curl, each with shape (n_samples, 1).

Return type:

tuple[torch.Tensor]
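
As a sanity check on the definition: the curl of the rotational field \(\mathbf{u} = (-y, x, 0)\) is \((0, 0, 2)\) everywhere. A plain-Python finite-difference sketch (hypothetical, independent of the library's autograd-based implementation):

```python
def curl_fd(ux, uy, uz, x, y, z, h=1e-5):
    """Central-difference curl of a vector field given by three callables."""
    def d(f, axis, p):
        # partial derivative of f along `axis` at point p
        plus = [c + h * (i == axis) for i, c in enumerate(p)]
        minus = [c - h * (i == axis) for i, c in enumerate(p)]
        return (f(*plus) - f(*minus)) / (2 * h)

    p = (x, y, z)
    return (d(uz, 1, p) - d(uy, 2, p),   # (curl u)_x = dw/dy - dv/dz
            d(ux, 2, p) - d(uz, 0, p),   # (curl u)_y = du/dz - dw/dx
            d(uy, 0, p) - d(ux, 1, p))   # (curl u)_z = dv/dx - du/dy

cx, cy, cz = curl_fd(lambda x, y, z: -y,
                     lambda x, y, z: x,
                     lambda x, y, z: 0.0,
                     0.3, -0.7, 1.2)
```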

neurodiffeq.operators.cylindrical_curl(u_rho, u_phi, u_z, rho, phi, z)

Derives and evaluates the cylindrical curl of a cylindrical vector field \(u\).

Parameters:
  • u_rho (torch.Tensor) – The \(\rho\)-component of the vector field \(u\), must have shape (n_samples, 1).
  • u_phi (torch.Tensor) – The \(\phi\)-component of the vector field \(u\), must have shape (n_samples, 1).
  • u_z (torch.Tensor) – The \(z\)-component of the vector field \(u\), must have shape (n_samples, 1).
  • rho (torch.Tensor) – A vector of \(\rho\)-coordinate values, must have shape (n_samples, 1).
  • phi (torch.Tensor) – A vector of \(\phi\)-coordinate values, must have shape (n_samples, 1).
  • z (torch.Tensor) – A vector of \(z\)-coordinate values, must have shape (n_samples, 1).
Returns:

The \(\rho\), \(\phi\), and \(z\) components of the curl, each with shape (n_samples, 1).

Return type:

tuple[torch.Tensor]

neurodiffeq.operators.cylindrical_div(u_rho, u_phi, u_z, rho, phi, z)

Derives and evaluates the cylindrical divergence of a cylindrical vector field \(u\).

Parameters:
  • u_rho (torch.Tensor) – The \(\rho\)-component of the vector field \(u\), must have shape (n_samples, 1).
  • u_phi (torch.Tensor) – The \(\phi\)-component of the vector field \(u\), must have shape (n_samples, 1).
  • u_z (torch.Tensor) – The \(z\)-component of the vector field \(u\), must have shape (n_samples, 1).
  • rho (torch.Tensor) – A vector of \(\rho\)-coordinate values, must have shape (n_samples, 1).
  • phi (torch.Tensor) – A vector of \(\phi\)-coordinate values, must have shape (n_samples, 1).
  • z (torch.Tensor) – A vector of \(z\)-coordinate values, must have shape (n_samples, 1).
Returns:

The divergence evaluated at \((\rho, \phi, z)\), with shape (n_samples, 1).

Return type:

torch.Tensor

neurodiffeq.operators.cylindrical_grad(u, rho, phi, z)

Derives and evaluates the cylindrical gradient of a cylindrical scalar field \(u\).

Parameters:
  • u (torch.Tensor) – A scalar field \(u\), must have shape (n_samples, 1).
  • rho (torch.Tensor) – A vector of \(\rho\)-coordinate values, must have shape (n_samples, 1).
  • phi (torch.Tensor) – A vector of \(\phi\)-coordinate values, must have shape (n_samples, 1).
  • z (torch.Tensor) – A vector of \(z\)-coordinate values, must have shape (n_samples, 1).
Returns:

The \(\rho\), \(\phi\), and \(z\) components of the gradient, each with shape (n_samples, 1).

Return type:

tuple[torch.Tensor]

neurodiffeq.operators.cylindrical_laplacian(u, rho, phi, z)

Derives and evaluates the cylindrical laplacian of a cylindrical scalar field \(u\).

Parameters:
  • u (torch.Tensor) – A scalar field \(u\), must have shape (n_samples, 1).
  • rho (torch.Tensor) – A vector of \(\rho\)-coordinate values, must have shape (n_samples, 1).
  • phi (torch.Tensor) – A vector of \(\phi\)-coordinate values, must have shape (n_samples, 1).
  • z (torch.Tensor) – A vector of \(z\)-coordinate values, must have shape (n_samples, 1).
Returns:

The laplacian evaluated at \((\rho, \phi, z)\), with shape (n_samples, 1).

Return type:

torch.Tensor

neurodiffeq.operators.cylindrical_to_cartesian(rho, phi, z)

Convert cylindrical coordinates \((\rho, \phi, z)\) to cartesian coordinates \((x, y, z)\). The input shapes of rho, phi, and z must be the same.

Parameters:
  • rho (torch.Tensor) – The \(\rho\)-component of cylindrical coordinates.
  • phi (torch.Tensor) – The \(\phi\)-component (azimuthal angle) of cylindrical coordinates.
  • z (torch.Tensor) – The \(z\)-component of cylindrical coordinates.
Returns:

The \(x\)-, \(y\)-, and \(z\)-component in cartesian coordinates.

Return type:

tuple[torch.Tensor]
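
The scalar math behind the two cylindrical conversions, sketched in plain Python by round-tripping a point (hypothetical, not the library's tensor implementation):

```python
import math

def cyl_to_cart(rho, phi, z):
    """(rho, phi, z) -> (x, y, z)."""
    return rho * math.cos(phi), rho * math.sin(phi), z

def cart_to_cyl(x, y, z):
    """(x, y, z) -> (rho, phi, z); phi falls back to 0 on the z-axis."""
    return math.hypot(x, y), math.atan2(y, x), z

x, y, z = cyl_to_cart(2.0, math.pi / 2, 1.0)
rho, phi, z2 = cart_to_cyl(x, y, z)  # round trip recovers (2, pi/2, 1)
```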

neurodiffeq.operators.cylindrical_vector_laplacian(u_rho, u_phi, u_z, rho, phi, z)

Derives and evaluates the cylindrical laplacian of a cylindrical vector field \(u\).

Parameters:
  • u_rho (torch.Tensor) – The \(\rho\)-component of the vector field \(u\), must have shape (n_samples, 1).
  • u_phi (torch.Tensor) – The \(\phi\)-component of the vector field \(u\), must have shape (n_samples, 1).
  • u_z (torch.Tensor) – The \(z\)-component of the vector field \(u\), must have shape (n_samples, 1).
  • rho (torch.Tensor) – A vector of \(\rho\)-coordinate values, must have shape (n_samples, 1).
  • phi (torch.Tensor) – A vector of \(\phi\)-coordinate values, must have shape (n_samples, 1).
  • z (torch.Tensor) – A vector of \(z\)-coordinate values, must have shape (n_samples, 1).
Returns:

The laplacian evaluated at \((\rho, \phi, z)\), with shape (n_samples, 1).

Return type:

torch.Tensor

neurodiffeq.operators.div(*us_xs)

Derives and evaluates the divergence of a \(n\)-dimensional vector field \(\mathbf{u}\) with respect to \(\mathbf{x}\).

Parameters:us_xs (torch.Tensor) – The input must have \(2n\) tensors, each of shape (n_samples, 1) with the former \(n\) tensors being the entries of \(u\) and the latter \(n\) tensors being the entries of \(x\).
Returns:The divergence evaluated at \(x\), with shape (n_samples, 1).
Return type:torch.Tensor
neurodiffeq.operators.grad(u, *xs)

Gradient of tensor u with respect to a tuple of tensors xs. Given \(u\) and \(x_1\), …, \(x_n\), the function returns \(\frac{\partial u}{\partial x_1}\), …, \(\frac{\partial u}{\partial x_n}\)

Parameters:
  • u (torch.Tensor) – The \(u\) described above.
  • xs (torch.Tensor) – The sequence of \(x_i\) described above.
Returns:

A tuple of \(\frac{\partial u}{\partial x_1}\), …, \(\frac{\partial u}{\partial x_n}\)

Return type:

List[torch.Tensor]

neurodiffeq.operators.laplacian(u, *xs)

Derives and evaluates the laplacian of a scalar field \(u\) with respect to \(\mathbf{x}=[x_1, x_2, \dots]\)

Parameters:
  • u (torch.Tensor) – A scalar field \(u\), must have shape (n_samples, 1).
  • xs (torch.Tensor) – The sequence of \(x_i\) described above. Each with shape (n_samples, 1)
Returns:

The laplacian of \(u\) evaluated at \(\mathbf{x}\), with shape (n_samples, 1).

Return type:

torch.Tensor
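
For intuition: the laplacian of \(u = x^2 + y^2\) is 4 everywhere. A plain-Python finite-difference sketch of the operator (hypothetical; the library computes this via autograd on tensors):

```python
def laplacian_fd(u, point, h=1e-4):
    """Central-difference laplacian of a scalar callable at a point."""
    total = 0.0
    for i in range(len(point)):
        plus = list(point)
        minus = list(point)
        plus[i] += h
        minus[i] -= h
        # second central difference along dimension i
        total += (u(*plus) - 2 * u(*point) + u(*minus)) / (h * h)
    return total

lap = laplacian_fd(lambda x, y: x ** 2 + y ** 2, (0.5, -1.0))
```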

neurodiffeq.operators.spherical_curl(u_r, u_theta, u_phi, r, theta, phi)

Derives and evaluates the spherical curl of a spherical vector field \(u\).

Parameters:
  • u_r (torch.Tensor) – The \(r\)-component of the vector field \(u\), must have shape (n_samples, 1).
  • u_theta (torch.Tensor) – The \(\theta\)-component of the vector field \(u\), must have shape (n_samples, 1).
  • u_phi (torch.Tensor) – The \(\phi\)-component of the vector field \(u\), must have shape (n_samples, 1).
  • r (torch.Tensor) – A vector of \(r\)-coordinate values, must have shape (n_samples, 1).
  • theta (torch.Tensor) – A vector of \(\theta\)-coordinate values, must have shape (n_samples, 1).
  • phi (torch.Tensor) – A vector of \(\phi\)-coordinate values, must have shape (n_samples, 1).
Returns:

The \(r\), \(\theta\), and \(\phi\) components of the curl, each with shape (n_samples, 1).

Return type:

tuple[torch.Tensor]

neurodiffeq.operators.spherical_div(u_r, u_theta, u_phi, r, theta, phi)

Derives and evaluates the spherical divergence of a spherical vector field \(u\).

Parameters:
  • u_r (torch.Tensor) – The \(r\)-component of the vector field \(u\), must have shape (n_samples, 1).
  • u_theta (torch.Tensor) – The \(\theta\)-component of the vector field \(u\), must have shape (n_samples, 1).
  • u_phi (torch.Tensor) – The \(\phi\)-component of the vector field \(u\), must have shape (n_samples, 1).
  • r (torch.Tensor) – A vector of \(r\)-coordinate values, must have shape (n_samples, 1).
  • theta (torch.Tensor) – A vector of \(\theta\)-coordinate values, must have shape (n_samples, 1).
  • phi (torch.Tensor) – A vector of \(\phi\)-coordinate values, must have shape (n_samples, 1).
Returns:

The divergence evaluated at \((r, \theta, \phi)\), with shape (n_samples, 1).

Return type:

torch.Tensor

neurodiffeq.operators.spherical_grad(u, r, theta, phi)

Derives and evaluates the spherical gradient of a spherical scalar field \(u\).

Parameters:
  • u (torch.Tensor) – A scalar field \(u\), must have shape (n_samples, 1).
  • r (torch.Tensor) – A vector of \(r\)-coordinate values, must have shape (n_samples, 1).
  • theta (torch.Tensor) – A vector of \(\theta\)-coordinate values, must have shape (n_samples, 1).
  • phi (torch.Tensor) – A vector of \(\phi\)-coordinate values, must have shape (n_samples, 1).
Returns:

The \(r\), \(\theta\), and \(\phi\) components of the gradient, each with shape (n_samples, 1).

Return type:

tuple[torch.Tensor]

neurodiffeq.operators.spherical_laplacian(u, r, theta, phi)

Derives and evaluates the spherical laplacian of a spherical scalar field \(u\).

Parameters:
  • u (torch.Tensor) – A scalar field \(u\), must have shape (n_samples, 1).
  • r (torch.Tensor) – A vector of \(r\)-coordinate values, must have shape (n_samples, 1).
  • theta (torch.Tensor) – A vector of \(\theta\)-coordinate values, must have shape (n_samples, 1).
  • phi (torch.Tensor) – A vector of \(\phi\)-coordinate values, must have shape (n_samples, 1).
Returns:

The laplacian evaluated at \((r, \theta, \phi)\), with shape (n_samples, 1).

Return type:

torch.Tensor

neurodiffeq.operators.spherical_to_cartesian(r, theta, phi)

Convert spherical coordinates \((r, \theta, \phi)\) to cartesian coordinates \((x, y, z)\). The input shapes of r, theta, and phi must be the same.

Parameters:
  • r (torch.Tensor) – The \(r\)-component of spherical coordinates.
  • theta (torch.Tensor) – The \(\theta\)-component (polar angle) of spherical coordinates.
  • phi (torch.Tensor) – The \(\phi\)-component (azimuthal angle) of spherical coordinates.
Returns:

The \(x\)-, \(y\)-, and \(z\)-component in cartesian coordinates.

Return type:

tuple[torch.Tensor]

neurodiffeq.operators.spherical_vector_laplacian(u_r, u_theta, u_phi, r, theta, phi)

Derives and evaluates the spherical laplacian of a spherical vector field \(u\).

Parameters:
  • u_r (torch.Tensor) – The \(r\)-component of the vector field \(u\), must have shape (n_samples, 1).
  • u_theta (torch.Tensor) – The \(\theta\)-component of the vector field \(u\), must have shape (n_samples, 1).
  • u_phi (torch.Tensor) – The \(\phi\)-component of the vector field \(u\), must have shape (n_samples, 1).
  • r (torch.Tensor) – A vector of \(r\)-coordinate values, must have shape (n_samples, 1).
  • theta (torch.Tensor) – A vector of \(\theta\)-coordinate values, must have shape (n_samples, 1).
  • phi (torch.Tensor) – A vector of \(\phi\)-coordinate values, must have shape (n_samples, 1).
Returns:

The laplacian evaluated at \((r, \theta, \phi)\), with shape (n_samples, 1).

Return type:

torch.Tensor

neurodiffeq.operators.vector_laplacian(u_x, u_y, u_z, x, y, z)

Derives and evaluates the vector laplacian of a vector field \(\mathbf{u}\) in three dimensional cartesian coordinates.

Parameters:
  • u_x (torch.Tensor) – The \(x\)-component of the vector field \(u\), must have shape (n_samples, 1).
  • u_y (torch.Tensor) – The \(y\)-component of the vector field \(u\), must have shape (n_samples, 1).
  • u_z (torch.Tensor) – The \(z\)-component of the vector field \(u\), must have shape (n_samples, 1).
  • x (torch.Tensor) – A vector of \(x\)-coordinate values, must have shape (n_samples, 1).
  • y (torch.Tensor) – A vector of \(y\)-coordinate values, must have shape (n_samples, 1).
  • z (torch.Tensor) – A vector of \(z\)-coordinate values, must have shape (n_samples, 1).
Returns:

Components of vector laplacian of \(\mathbf{u}\) evaluated at \(\mathbf{x}\), each with shape (n_samples, 1).

Return type:

tuple[torch.Tensor]

neurodiffeq.callbacks

class neurodiffeq.callbacks.ActionCallback(logger=None)

Bases: neurodiffeq.callbacks.BaseCallback

Base class of action callbacks. Custom callbacks that perform an action should subclass this class.

Parameters:logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.
class neurodiffeq.callbacks.AndCallback(condition_callbacks, logger=None)

Bases: neurodiffeq.callbacks.ConditionCallback

A ConditionCallback which evaluates to True iff all of its sub-ConditionCallback s evaluate to True.

Parameters:
  • condition_callbacks (list[ConditionCallback]) – List of sub-ConditionCallback s.
  • logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.

Note

c = AndCallback([c1, c2, c3]) can be simplified as c = c1 & c2 & c3.

class neurodiffeq.callbacks.BaseCallback(logger=None)

Bases: abc.ABC, neurodiffeq.callbacks._LoggerMixin

Base class of all callbacks. The class should not be directly subclassed. Instead, subclass ActionCallback or ConditionCallback.

Parameters:logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.
class neurodiffeq.callbacks.CheckpointCallback(ckpt_dir, logger=None)

Bases: neurodiffeq.callbacks.ActionCallback

A callback that saves the networks (and their weights) to the disk.

Parameters:
  • ckpt_dir (str) – The directory to save model checkpoints. If non-existent, the directory is automatically created at instantiation time.
  • logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.

Note

Unless the callback is called twice within the same second, new checkpoints will not overwrite existing ones.

class neurodiffeq.callbacks.ClosedIntervalGlobal(min=None, max=None, logger=None)

Bases: neurodiffeq.callbacks.ConditionCallback

A ConditionCallback which evaluates to True only when \(g_0 \leq g \leq g_1\), where \(g\) is the global epoch count.

Parameters:
  • min (int) – Lower bound of the closed interval (\(g_0\) in the above inequality). Defaults to None.
  • max (int) – Upper bound of the closed interval (\(g_1\) in the above inequality). Defaults to None.
  • logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.
class neurodiffeq.callbacks.ClosedIntervalLocal(min=None, max=None, logger=None)

Bases: neurodiffeq.callbacks.ConditionCallback

A ConditionCallback which evaluates to True only when \(l_0 \leq l \leq l_1\), where \(l\) is the local epoch count.

Parameters:
  • min (int) – Lower bound of the closed interval (\(l_0\) in the above inequality). Defaults to None.
  • max (int) – Upper bound of the closed interval (\(l_1\) in the above inequality). Defaults to None.
  • logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.
class neurodiffeq.callbacks.ConditionCallback(logger=None)

Bases: neurodiffeq.callbacks.BaseCallback

Base class of condition callbacks. Custom callbacks that determine whether some action shall be performed should subclass this class and override the .condition method.

Instances of ConditionCallback (and its children classes) support (short-circuit) evaluation of common boolean operations: & (and), | (or), ~ (not), and ^ (xor).

Parameters:logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.
class neurodiffeq.callbacks.EveCallback(base_value=1.0, double_at=0.1, n_0=1, n_max=None, use_train=True, metric='loss', logger=None)

Bases: neurodiffeq.callbacks.ActionCallback

A callback that readjusts the number of batches for training based on latest value of a specified metric. The number of batches will be \(\displaystyle{\left(n_0 \cdot 2^k\right)}\) or \(n_\mathrm{max}\) (if specified), whichever is lower, where \(\displaystyle{k=\max\left(0,\left\lfloor\log_p{\frac{v}{v_0}}\right\rfloor\right)}\) and \(v\) is the value of the metric in the last epoch.

Parameters:
  • base_value (float) – Base value of the specified metric (\(v_0\) in the above equation). When the metric value is higher than base_value, number of batches will be \(n_0\).
  • double_at (float) – The ratio at which the batch number will be doubled (\(p\) in the above equation). When \(\displaystyle{\frac{v}{v_0}=p^k}\), the number of batches will be \(\displaystyle{\left(n_0 \cdot 2^k\right)}\).
  • n_0 (int) – Minimum number of batches (\(n_0\)). Defaults to 1.
  • n_max (int) – Maximum number of batches (\(n_\mathrm{max}\)). Defaults to infinity.
  • use_train (bool) – Whether to use the training (instead of validation) phase value of the metric. Defaults to True.
  • metric (str) – Name of which metric to use. Must be ‘loss’ or present in solver.metrics_fn.keys(). Defaults to ‘loss’.
  • logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.
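
The batch-count schedule can be expressed as a small plain-Python function (a hypothetical helper mirroring the formula above, not the callback's code):

```python
import math

def n_batches(v, v0=1.0, p=0.1, n0=1, n_max=None):
    """Number of batches for metric value v, per the EveCallback formula.

    k = max(0, floor(log_p(v / v0))); n = n0 * 2**k, capped at n_max.
    """
    k = max(0, math.floor(math.log(v / v0, p)))
    n = n0 * 2 ** k
    return n if n_max is None else min(n, n_max)

# As the metric decays past powers of p, the batch count doubles.
counts = [n_batches(1.0), n_batches(0.5), n_batches(0.005),
          n_batches(2e-7, n_max=16)]
```
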
class neurodiffeq.callbacks.FalseCallback(logger=None)

Bases: neurodiffeq.callbacks.ConditionCallback

A ConditionCallback which always evaluates to False.

Parameters:logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.
class neurodiffeq.callbacks.MonitorCallback(monitor, fig_dir=None, format=None, logger=None, **kwargs)

Bases: neurodiffeq.callbacks.ActionCallback

A callback for updating the monitor plots (and optionally saving the fig to disk).

Parameters:
  • monitor (neurodiffeq.monitors.BaseMonitor) – The underlying monitor responsible for plotting solutions.
  • fig_dir (str) – Directory for saving monitor figs; if not specified, figs will not be saved.
  • format (str) – Format for saving figures: {‘jpg’, ‘png’ (default), …}.
  • logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.
class neurodiffeq.callbacks.NotCallback(condition_callback, logger=None)

Bases: neurodiffeq.callbacks.ConditionCallback

A ConditionCallback which evaluates to True iff its sub-ConditionCallback evaluates to False.

Parameters:
  • condition_callback (ConditionCallback) – The sub-ConditionCallback .
  • logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.

Note

c = NotCallback(c1) can be simplified as c = ~c1.

class neurodiffeq.callbacks.OnFirstGlobal(logger=None)

Bases: neurodiffeq.callbacks.ConditionCallback

A ConditionCallback which evaluates to True only on the first global epoch.

Parameters:logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.
class neurodiffeq.callbacks.OnFirstLocal(logger=None)

Bases: neurodiffeq.callbacks.ConditionCallback

A ConditionCallback which evaluates to True only on the first local epoch.

Parameters:logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.
class neurodiffeq.callbacks.OnLastLocal(logger=None)

Bases: neurodiffeq.callbacks.ConditionCallback

A ConditionCallback which evaluates to True only on the last local epoch.

Parameters:logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.
class neurodiffeq.callbacks.OrCallback(condition_callbacks, logger=None)

Bases: neurodiffeq.callbacks.ConditionCallback

A ConditionCallback which evaluates to True iff at least one of its sub-ConditionCallback s evaluates to True.

Parameters:
  • condition_callbacks (list[ConditionCallback]) – List of sub-ConditionCallback s.
  • logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.

Note

c = OrCallback([c1, c2, c3]) can be simplified as c = c1 | c2 | c3.

class neurodiffeq.callbacks.PeriodGlobal(period, offset=0, logger=None)

Bases: neurodiffeq.callbacks.ConditionCallback

A ConditionCallback which evaluates to True only when the global epoch count equals \(\mathrm{period}\times n + \mathrm{offset}\).

Parameters:
  • period (int) – Period of the callback.
  • offset (int) – Offset of the period. Defaults to 0.
  • logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.
class neurodiffeq.callbacks.PeriodLocal(period, offset=0, logger=None)

Bases: neurodiffeq.callbacks.ConditionCallback

A ConditionCallback which evaluates to True only when the local epoch count equals \(\mathrm{period}\times n + \mathrm{offset}\).

Parameters:
  • period (int) – Period of the callback.
  • offset (int) – Offset of the period. Defaults to 0.
  • logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.
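
For both PeriodGlobal and PeriodLocal, the condition reduces to modular arithmetic on the epoch count. A plain-Python sketch (hypothetical, not the callback's code; whether epoch counting starts at 0 or 1 is an assumption here):

```python
def fires_at(epoch, period, offset=0):
    """True iff epoch == period * n + offset for some integer n >= 0."""
    return epoch >= offset and (epoch - offset) % period == 0

# With period=5 and offset=2, the callback fires at epochs 2, 7, 12, ...
hits = [e for e in range(12) if fires_at(e, period=5, offset=2)]
```
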
class neurodiffeq.callbacks.ProgressBarCallBack(logger=None)

Bases: neurodiffeq.callbacks.ActionCallback

class neurodiffeq.callbacks.Random(probability, logger=None)

Bases: neurodiffeq.callbacks.ConditionCallback

A ConditionCallback which has a certain probability of evaluating to True.

Parameters:
  • probability (float) – The probability of this callback evaluating to True (between 0 and 1).
  • logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.
class neurodiffeq.callbacks.RepeatedMetricAbove(threshold, use_train, metric, repetition, logger)

Bases: neurodiffeq.callbacks._RepeatedMetricChange

A ConditionCallback which evaluates to True if a certain metric has been greater than a given value \(v\) for the latest \(n\) epochs.

Parameters:
  • threshold (float) – The value \(v\).
  • use_train (bool) – Whether to use the metric value in the training (rather than validation) phase.
  • metric (str) – Name of which metric to use. Must be ‘loss’ or present in solver.metrics_fn.keys(). Defaults to ‘loss’.
  • repetition (int) – Number of latest epochs (the \(n\)) for which the metric should stay above the threshold.
  • logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.
class neurodiffeq.callbacks.RepeatedMetricBelow(threshold, use_train, metric, repetition, logger)

Bases: neurodiffeq.callbacks._RepeatedMetricChange

A ConditionCallback which evaluates to True if a certain metric has been less than a given value \(v\) for the latest \(n\) epochs.

Parameters:
  • threshold (float) – The value \(v\).
  • use_train (bool) – Whether to use the metric value in the training (rather than validation) phase.
  • metric (str) – Name of which metric to use. Must be ‘loss’ or present in solver.metrics_fn.keys(). Defaults to ‘loss’.
  • repetition (int) – Number of latest epochs (the \(n\)) for which the metric should stay below the threshold.
  • logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.
class neurodiffeq.callbacks.RepeatedMetricConverge(epsilon, use_train=True, metric='loss', repetition=1, logger=None)

Bases: neurodiffeq.callbacks._RepeatedMetricChange

A ConditionCallback which evaluates to True if a certain metric for the latest \(n\) epochs kept converging within some tolerance \(\varepsilon\).

Parameters:
  • epsilon (float) – The said tolerance.
  • use_train (bool) – Whether to use the metric value in the training (rather than validation) phase.
  • metric (str) – Name of which metric to use. Must be ‘loss’ or present in solver.metrics_fn.keys(). Defaults to ‘loss’.
  • repetition (int) – Number of times the metric should converge within said tolerance.
  • logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.
class neurodiffeq.callbacks.RepeatedMetricDiverge(gap, use_train=True, metric='loss', repetition=1, logger=None)

Bases: neurodiffeq.callbacks._RepeatedMetricChange

A ConditionCallback which evaluates to True if a certain metric for the latest \(n\) epochs kept diverging beyond some gap.

Parameters:
  • gap (float) – The said gap.
  • use_train (bool) – Whether to use the metric value in the training (rather than validation) phase.
  • metric (str) – Name of which metric to use. Must be ‘loss’ or present in solver.metrics_fn.keys(). Defaults to ‘loss’.
  • repetition (int) – Number of times the metric should diverge beyond said gap.
  • logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.
class neurodiffeq.callbacks.RepeatedMetricDown(at_least_by=0.0, use_train=True, metric='loss', repetition=1, logger=None)

Bases: neurodiffeq.callbacks._RepeatedMetricChange

A ConditionCallback which evaluates to True if a certain metric for the latest \(n\) epochs kept decreasing by at least some margin.

Parameters:
  • at_least_by (float) – The said margin.
  • use_train (bool) – Whether to use the metric value in the training (rather than validation) phase.
  • metric (str) – Name of which metric to use. Must be ‘loss’ or present in solver.metrics_fn.keys(). Defaults to ‘loss’.
  • repetition (int) – Number of times the metric should decrease by the said margin (the \(n\)).
  • logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.
class neurodiffeq.callbacks.RepeatedMetricUp(at_least_by=0.0, use_train=True, metric='loss', repetition=1, logger=None)

Bases: neurodiffeq.callbacks._RepeatedMetricChange

A ConditionCallback which evaluates to True if a certain metric for the latest \(n\) epochs kept increasing by at least some margin.

Parameters:
  • at_least_by (float) – The said margin.
  • use_train (bool) – Whether to use the metric value in the training (rather than validation) phase.
  • metric (str) – Name of which metric to use. Must be ‘loss’ or present in solver.metrics_fn.keys(). Defaults to ‘loss’.
  • repetition (int) – Number of times the metric should increase by the said margin (the \(n\)).
  • logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.
class neurodiffeq.callbacks.ReportCallback(logger=None)

Bases: neurodiffeq.callbacks.ActionCallback

A callback that logs the training/validation information, including

  • number of batches (train/valid)
  • batch size (train/valid)
  • generator to be used (train/valid)
Parameters:logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.
neurodiffeq.callbacks.ReportOnFitCallback(logger=None)

A callback that logs the training/validation information, including

  • number of batches (train/valid)
  • batch size (train/valid)
  • generator to be used (train/valid)
Parameters:logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.
neurodiffeq.callbacks.SetCriterion(loss_fn, reset=False, logger=None)

A callback that sets the criterion (a.k.a. loss function) of the solver. Best used together with a condition callback.

Parameters:
  • loss_fn (torch.nn.modules.loss._Loss or callable or str) –

    The loss function to be set for the solver. It can be

    • An instance of torch.nn.modules.loss._Loss which computes loss of the PDE/ODE residuals against a zero tensor.
    • A callable object which maps residuals, function values, and input coordinates to a scalar loss; or
    • A str which is present in neurodiffeq.losses._losses.keys().
  • reset (bool) – If True, the criterion will be reset every time the callback is called. Otherwise, the criterion will only be set once. Defaults to False.
  • logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.
class neurodiffeq.callbacks.SetLossFn(loss_fn, reset=False, logger=None)

Bases: neurodiffeq.callbacks.ActionCallback

A callback that sets the criterion (a.k.a. loss function) of the solver. Best used together with a condition callback.

Parameters:
  • loss_fn (torch.nn.modules.loss._Loss or callable or str) –

    The loss function to be set for the solver. It can be

    • An instance of torch.nn.modules.loss._Loss which computes loss of the PDE/ODE residuals against a zero tensor.
    • A callable object which maps residuals, function values, and input coordinates to a scalar loss; or
    • A str which is present in neurodiffeq.losses._losses.keys().
  • reset (bool) – If True, the criterion will be reset every time the callback is called. Otherwise, the criterion will only be set once. Defaults to False.
  • logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.
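The `reset` flag of SetLossFn can be illustrated with a minimal pure-Python sketch (the class below is hypothetical and stands in for the callback; the solver is modeled as a plain dict): with reset=False the criterion is set only on the first invocation, while reset=True re-sets it every time.

```python
class SetOnceSketch:
    """Sketch of the assumed `reset` semantics of SetLossFn."""
    def __init__(self, loss_name, reset=False):
        self.loss_name = loss_name
        self.reset = reset
        self._applied = False

    def __call__(self, solver_state):
        # Only write the criterion on the first call unless reset=True
        if self.reset or not self._applied:
            solver_state["criterion"] = self.loss_name
            self._applied = True

state = {"criterion": "mse"}
cb = SetOnceSketch("l1")          # reset=False
cb(state)                          # sets criterion to "l1"
state["criterion"] = "huber"       # changed elsewhere in the meantime
cb(state)                          # no-op: already applied once
```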
class neurodiffeq.callbacks.SetOptimizer(optimizer, optimizer_args=None, optimizer_kwargs=None, reset=False, logger=None)

Bases: neurodiffeq.callbacks.ActionCallback

A callback that sets the optimizer of the solver. Best used together with a condition callback.

  • If an optimizer instance is passed, it must contain a sequence of parameters to be updated.
  • If an optimizer subclass is passed, optimizer_args and optimizer_kwargs can be supplied.
Parameters:
  • optimizer (type or torch.optim.Optimizer) – Optimizer instance (or its class) to be set.
  • optimizer_args (tuple) – Positional arguments to be passed to the optimizer constructor in addition to the parameter sequence. Ignored if optimizer is an instance (instead of a class).
  • optimizer_kwargs (dict) – Keyword arguments to be passed to the optimizer constructor in addition to the parameter sequence. Ignored if optimizer is an instance (instead of a class).
  • reset (bool) – If True, the optimizer will be reset every time the callback is called. Otherwise, the optimizer will only be set once. Defaults to False.
  • logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.
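The instance-vs-class dispatch described above can be sketched as follows. This is an assumed re-implementation for illustration (the helper name and the dummy optimizer class are hypothetical, not library code): a class is instantiated with the parameter sequence plus optimizer_args/optimizer_kwargs, whereas an instance is passed through unchanged since it already carries its parameters.

```python
import inspect

def resolve_optimizer(optimizer, params, optimizer_args=None, optimizer_kwargs=None):
    """Sketch of the assumed dispatch in SetOptimizer."""
    if inspect.isclass(optimizer):
        # Class: construct it, prepending the parameter sequence
        return optimizer(params, *(optimizer_args or ()), **(optimizer_kwargs or {}))
    # Instance: args/kwargs are ignored; it is already bound to its params
    return optimizer

class DummyOptimizer:
    """Stand-in for a torch.optim.Optimizer subclass."""
    def __init__(self, params, lr=0.1):
        self.param_groups = [{"params": list(params), "lr": lr}]
```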
class neurodiffeq.callbacks.SimpleTensorboardCallback(writer=None, logger=None)

Bases: neurodiffeq.callbacks.ActionCallback

A callback that writes all metric values to the disk for TensorBoard to plot. TensorBoard must be installed for this callback to work.

Parameters:
  • writer (torch.utils.tensorboard.SummaryWriter) – The summary writer for writing values to disk. Defaults to a new SummaryWriter instance created with default kwargs.
  • logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.
class neurodiffeq.callbacks.StopCallback(logger=None)

Bases: neurodiffeq.callbacks.ActionCallback

A callback that stops the training/validation process and terminates the solver.fit() call.

Parameters:logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.

Note

This callback should always be used together with a ConditionCallback; otherwise, the solver.fit() call will exit after the first epoch.

class neurodiffeq.callbacks.TrueCallback(logger=None)

Bases: neurodiffeq.callbacks.ConditionCallback

A ConditionCallback which always evaluates to True.

Parameters:logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.
class neurodiffeq.callbacks.XorCallback(condition_callbacks, logger=None)

Bases: neurodiffeq.callbacks.ConditionCallback

A ConditionCallback which evaluates to False iff an even number of its sub-ConditionCallbacks evaluate to True.

Parameters:
  • condition_callbacks (list[ConditionCallback]) – List of sub-ConditionCallbacks.
  • logger (str or logging.Logger) – The logger (or its name) to be used for this callback. Defaults to the ‘root’ logger.

Note

c = XorCallback([c1, c2, c3]) can be simplified as c = c1 ^ c2 ^ c3.
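The parity rule above (False iff an even number of sub-conditions hold) is just a chained XOR, matching the c1 ^ c2 ^ c3 shorthand. A minimal pure-Python sketch of the assumed semantics (not the library class itself):

```python
from functools import reduce
from operator import xor

def xor_condition(flags):
    # Chained XOR over the sub-condition results:
    # True iff an odd number of flags are True,
    # i.e. False iff evenly many are True.
    return reduce(xor, (bool(f) for f in flags), False)
```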

neurodiffeq.utils

neurodiffeq.utils.set_seed(seed_value, ignore_numpy=False, ignore_torch=False, ignore_random=False)

Set the random seed for the numpy, torch, and random packages.

Parameters:
  • seed_value (int) – The value of seed.
  • ignore_numpy (bool) – If True, the seed for numpy.random will not be set. Defaults to False.
  • ignore_torch (bool) – If True, the seed for torch will not be set. Defaults to False.
  • ignore_random (bool) – If True, the seed for random will not be set. Defaults to False.
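The effect of set_seed can be sketched for the stdlib random package alone (the function below is a hypothetical reduction; the real set_seed also seeds numpy.random and torch unless the corresponding ignore_* flags are set):

```python
import random

def set_seed_sketch(seed_value, ignore_random=False):
    """Sketch of the `random` part of set_seed; numpy/torch seeding omitted."""
    if not ignore_random:
        random.seed(seed_value)

set_seed_sketch(42)
first = [random.random() for _ in range(3)]
set_seed_sketch(42)
second = [random.random() for _ in range(3)]
assert first == second  # same seed reproduces the same sequence
```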
neurodiffeq.utils.set_tensor_type(device=None, float_bits=32)

Set the default torch tensor type to be used with neurodiffeq.

Parameters:
  • device (str) – Either “cpu” or “cuda” (“gpu”); defaults to “cuda” if available.
  • float_bits (int) – Length of float numbers. Either 32 (float) or 64 (double); defaults to 32.