SolidWorks Videos

Tuesday, August 6, 2024

Teaching Navier-Stokes Equations to Sand: Flow-Field Reconstruction Using a Physics-Informed Neural Network (Includes Free Code)

     This post is about training a Physics-Informed Neural Network (PINN) to solve the Navier-Stokes equations with limited data availability. Why is the data necessary? Because the Navier-Stokes equations, the first principles of fluid dynamics, are extremely nonlinear and highly coupled. As of writing this post, no analytical solutions to these equations exist for any real-world scenario. This has resulted in attempts to solve these glorious equations using numerical methods by several (crazy) researchers, including yours truly (read more here). Furthermore, PINNs are particularly advantageous when only sparse or discrete experimental data is available, such as measurements from a wind tunnel; a PINN can then reconstruct the complete flow field and visualize physical quantities that are not directly measurable!

     Well, recently even crazier researchers, including yours truly again (read more here), have resorted to modeling the solutions to these wonderful equations using machine learning, i.e. neural networks. As a hobby, while experimenting with neural networks over the past few years, yours truly has learned that the vanilla data-less PINN (free code here) approach only works for very simple, impractically low Reynolds number cases. For these low Reynolds numbers, analytical solutions to the Navier-Stokes equations already exist. For complex cases of fluid flow, such as turbulent flow, the loss landscape becomes too complicated and the neural network gets stuck in a local minimum instead of moving towards the global minimum.

     Therefore, in this post, a PINN with limited data availability is presented. The cases considered are the asymmetric lid-driven cavity, the backward-facing step, flow around a square cylinder at a much higher Reynolds number than in the previous post, and an impinging jet.

     The idea is to add a training-data term to the loss function to guide the PINN away from local minima. Even if a model is trained on the so-called "dirty" data used in the following examples, in conjunction with the Navier-Stokes equations, the trained model will accurately depict the fluid flow properties because the Navier-Stokes equations are included in the loss function. This is why this type of machine learning model outperforms most regression and interpolation methods and models. PINNs are also robust against noise in the data: the first principles embedded in the loss function act as a regularizer and reduce the impact of noise compared to other methods.
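     Schematically, the total loss minimized in the code below is a weighted sum of mean-squared errors, with the weights chosen empirically (they appear as plain divisions in the code):

$$ \mathcal{L}_{\mathrm{total}} = \lambda_{\mathrm{pde}} \, \mathcal{L}_{\mathrm{pde}} + \lambda_{\mathrm{bc}} \, \mathcal{L}_{\mathrm{bc}} + \lambda_{\mathrm{data}} \, \mathcal{L}_{\mathrm{data}} $$

where $\mathcal{L}_{\mathrm{pde}}$ penalizes the momentum and continuity residuals at the collocation points, $\mathcal{L}_{\mathrm{bc}}$ penalizes the boundary condition mismatch, and $\mathcal{L}_{\mathrm{data}}$ penalizes the mismatch against the sparse CFD or wind tunnel measurements.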

External Aerodynamics

     In this example of flow around a square cylinder, the CFD (Computational Fluid Dynamics) solution from a finite difference method code yours truly developed is used as training data. The CFD is performed using the highest possible CFL number and the largest possible grid size. This is done to make the training data for the neural network as bad as possible. The results from CFD and PINN are compared in Fig. 1. The Reynolds number is 1000. This case is done to verify that the method works. The results are in good agreement with solutions at this Reynolds number available in the literature. This is how PINNs can be used to reveal important features of a partially available flow-field, whether from experiments or from very coarse CFD simulations.

Fig. 1, Training data vs. reconstructed flow-field

     The second example is flow around a backward-facing step. As with the square cylinder, the data is very coarse. The same code can be used for the backward-facing step; just move the square's location (a sketch of this change follows). The Reynolds number for this case is low, i.e. 200. The results are shown in Fig. 2.
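     As a minimal sketch of that change (the exact values are my assumptions; the post only says to move the square), the square can be parked flush against the inlet and the bottom wall so its exposed faces form the step:

# hypothetical backward-facing step geometry: the square sits in the lower-left
# corner of the domain so its exposed faces form the step (values are assumptions)
length_square = 1 # length of square
square_center_x = length_square / 2 # square touches the left (inlet) boundary
square_center_y = length_square / 2 # square touches the bottom wall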

Fig. 2, Training data vs. reconstructed flow-field


Code

     Here is the PINN code to reproduce Fig. 1. The data from the wind tunnel or CFD has to be in .npy or .csv files.
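     As a minimal sketch, assuming a hypothetical data.csv with columns x, y, u, v, p (the file name and column order are my assumptions), such measurements can be converted into the .npy files the script below expects like so:

#%% hypothetical example: convert sparse measurements (CSV) to the .npy files used below
import numpy as np
data = np.loadtxt('data.csv', delimiter = ',', skiprows = 1) # skip header row; column order x, y, u, v, p is an assumption
np.save('X_filtered.npy', data[:, 0]) # x coordinates of measurement points
np.save('Y_filtered.npy', data[:, 1]) # y coordinates of measurement points
np.save('u_cfd_n.npy', data[:, 2]) # non-dimensional x velocity
np.save('v_cfd_n.npy', data[:, 3]) # non-dimensional y velocity
np.save('p_cfd_n.npy', data[:, 4]) # non-dimensional pressure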

#Copyright <2024> <Fahad Butt>
#Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
#The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
#THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

#%% import necessary libraries
import os
import numpy as np
import random as rn
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.init as init
import torch.optim as optim
# import torch_directml # enable for AMD and Intel gpu
import glob
device = torch.device("cpu") # cuda for CUDA gpu, cpu for cpu, mps for Apple GPU, torch_directml.device() for any other gpu

#%% 2D navier-stokes equations and boundary conditions
def navier_stokes_equation(ns):
    ns.requires_grad_(True) # watch ns for gradient computation
    
    out = model(ns) # output from model
    
    u = out[:, 0] # x velocity
    v = out[:, 1] # y velocity
    p = out[:, 2] # pressure
    
    du = torch.autograd.grad(u, ns, torch.ones_like(u), create_graph = True)[0] # du/dns
    dv = torch.autograd.grad(v, ns, torch.ones_like(v), create_graph = True)[0] # dv/dns
    dp = torch.autograd.grad(p, ns, torch.ones_like(p), create_graph = True)[0] # dp/dns
    
    u_x = du[:, 0] # du/dx
    u_y = du[:, 1] # du/dy
    v_x = dv[:, 0] # dv/dx
    v_y = dv[:, 1] # dv/dy
    p_x = dp[:, 0] # dp/dx
    p_y = dp[:, 1] # dp/dy
    
    u_xx = torch.autograd.grad(u_x, ns, torch.ones_like(u_x), create_graph = True)[0][:, 0] # d2u/dx2
    u_yy = torch.autograd.grad(u_y, ns, torch.ones_like(u_y), create_graph = True)[0][:, 1] # d2u/dy2
    v_xx = torch.autograd.grad(v_x, ns, torch.ones_like(v_x), create_graph = True)[0][:, 0] # d2v/dx2
    v_yy = torch.autograd.grad(v_y, ns, torch.ones_like(v_y), create_graph = True)[0][:, 1] # d2v/dy2
    
    x_mom = (u * u_x) + (v * u_y) + p_x - ((1 / Re) * (u_xx + u_yy)) # x momentum
    y_mom = (u * v_x) + (v * v_y) + p_y - ((1 / Re) * (v_xx + v_yy)) # y momentum
    cont = u_x + v_y # continuity
     
    # apply boundary conditions
    u_left_bc = u[ns[:, 0] == 0] - Uinf # x = 0
    u_right_bc = u_x[ns[:, 0] == L] # x = L
    u_top_bc = u[ns[:, 1] == D] - Uinf # y = D
    u_bottom_bc = u[ns[:, 1] == 0] - Uinf # y = 0

    u_left_bc_box = u[(ns[:, 0] == (square_center_x - length_square / 2)) &\
                      (ns[:, 1] >= (square_center_y - length_square / 2)) &\
                          (ns[:, 1] <= (square_center_y + length_square / 2))] # x = 0 (box)
    u_right_bc_box = u[(ns[:, 0] == (square_center_x + length_square / 2)) &\
                          (ns[:, 1] >= (square_center_y - length_square / 2)) &\
                              (ns[:, 1] <= (square_center_y + length_square / 2))] # x = L (box)
    u_top_bc_box = u[(ns[:, 1] == (square_center_y + length_square / 2)) &\
                     (ns[:, 0] >= (square_center_x - length_square / 2)) &\
                         (ns[:, 0] <= (square_center_x + length_square / 2))] # y = D (box)
    u_bottom_bc_box = u[(ns[:, 1] == (square_center_y - length_square / 2)) &\
                         (ns[:, 0] >= (square_center_x - length_square / 2)) &\
                             (ns[:, 0] <= (square_center_x + length_square / 2))] # y = 0 (box)
         
    v_left_bc = v[ns[:, 0] == 0] # x = 0
    v_right_bc = v_x[ns[:, 0] == L] # x = L
    v_top_bc = v[ns[:, 1] == D] # y = D
    v_bottom_bc = v[ns[:, 1] == 0] # y = 0
    
    v_left_bc_box = v[(ns[:, 0] == (square_center_x - length_square / 2)) &\
                      (ns[:, 1] >= (square_center_y - length_square / 2)) &\
                          (ns[:, 1] <= (square_center_y + length_square / 2))] # x = 0 (box)
    v_right_bc_box = v[(ns[:, 0] == (square_center_x + length_square / 2)) &\
                          (ns[:, 1] >= (square_center_y - length_square / 2)) &\
                              (ns[:, 1] <= (square_center_y + length_square / 2))] # x = L (box)
    v_top_bc_box = v[(ns[:, 1] == (square_center_y + length_square / 2)) &\
                     (ns[:, 0] >= (square_center_x - length_square / 2)) &\
                         (ns[:, 0] <= (square_center_x + length_square / 2))] # y = D (box)
    v_bottom_bc_box = v[(ns[:, 1] == (square_center_y - length_square / 2)) &\
                         (ns[:, 0] >= (square_center_x - length_square / 2)) &\
                             (ns[:, 0] <= (square_center_x + length_square / 2))] # y = 0 (box)
    
    p_left_bc = p_x[ns[:, 0] == 0] # x = 0
    p_right_bc = p[ns[:, 0] == L] # x = L
    p_top_bc = p_y[ns[:, 1] == D] # y = D
    p_bottom_bc = p_y[ns[:, 1] == 0] # y = 0
    
    p_left_bc_box = p_x[(ns[:, 0] == (square_center_x - length_square / 2)) &\
                      (ns[:, 1] >= (square_center_y - length_square / 2)) &\
                          (ns[:, 1] <= (square_center_y + length_square / 2))] # x = 0 (box)
    p_right_bc_box = p_x[(ns[:, 0] == (square_center_x + length_square / 2)) &\
                          (ns[:, 1] >= (square_center_y - length_square / 2)) &\
                              (ns[:, 1] <= (square_center_y + length_square / 2))] # x = L (box)
    p_top_bc_box = p_y[(ns[:, 1] == (square_center_y + length_square / 2)) &\
                     (ns[:, 0] >= (square_center_x - length_square / 2)) &\
                         (ns[:, 0] <= (square_center_x + length_square / 2))] # y = D (box)
    p_bottom_bc_box = p_y[(ns[:, 1] == (square_center_y - length_square / 2)) &\
                         (ns[:, 0] >= (square_center_x - length_square / 2)) &\
                             (ns[:, 0] <= (square_center_x + length_square / 2))] # y = 0 (box)
    return x_mom, y_mom, cont, \
        u_left_bc, u_right_bc, u_top_bc, u_bottom_bc, \
            v_left_bc, v_right_bc, v_top_bc, v_bottom_bc, \
                p_left_bc, p_right_bc, p_top_bc, p_bottom_bc, \
                    u_left_bc_box, u_right_bc_box, u_top_bc_box, u_bottom_bc_box, \
                        v_left_bc_box, v_right_bc_box, v_top_bc_box, v_bottom_bc_box, \
                            p_left_bc_box, p_right_bc_box, p_top_bc_box, p_bottom_bc_box

#%% define loss function
def loss_fn(ns, train_CFD_points, train_CFD_values):
    global x_mom_loss, y_mom_loss, cont_loss, u_bc_loss, v_bc_loss, p_bc_loss, mse_loss, bc_loss
    
    ns = ns.clone().detach().requires_grad_(True).to(device) # detach from any previous graph, enable gradient tracking, and move to the selected device
    
    x_mom, y_mom, cont, u_left_bc, u_right_bc, u_top_bc, u_bottom_bc,\
        v_left_bc, v_right_bc, v_top_bc, v_bottom_bc,\
            p_left_bc, p_right_bc, p_top_bc, p_bottom_bc,\
                u_left_bc_box, u_right_bc_box, u_top_bc_box, u_bottom_bc_box,\
                    v_left_bc_box, v_right_bc_box, v_top_bc_box, v_bottom_bc_box,\
                        p_left_bc_box, p_right_bc_box, p_top_bc_box, p_bottom_bc_box = navier_stokes_equation(ns) # compute residuals for Navier-Stokes equations with specified boundary and initial conditions
    
    x_mom_loss = nn.MSELoss()(x_mom, torch.zeros_like(x_mom)) # x momentum loss
    y_mom_loss = nn.MSELoss()(y_mom, torch.zeros_like(y_mom)) # y momentum loss
    cont_loss = nn.MSELoss()(cont, torch.zeros_like(cont)) # continuity loss
    
    u_bc_loss = nn.MSELoss()(u_left_bc, torch.zeros_like(u_left_bc)) + nn.MSELoss()(u_right_bc, torch.zeros_like(u_right_bc)) \
        + nn.MSELoss()(u_top_bc, torch.zeros_like(u_top_bc)) + nn.MSELoss()(u_bottom_bc, torch.zeros_like(u_bottom_bc)) \
            + nn.MSELoss()(u_left_bc_box, torch.zeros_like(u_left_bc_box)) + nn.MSELoss()(u_right_bc_box, torch.zeros_like(u_right_bc_box)) \
                + nn.MSELoss()(u_top_bc_box, torch.zeros_like(u_top_bc_box)) + nn.MSELoss()(u_bottom_bc_box, torch.zeros_like(u_bottom_bc_box)) # u momentum boundary condition loss
    
    v_bc_loss = nn.MSELoss()(v_left_bc, torch.zeros_like(v_left_bc)) + nn.MSELoss()(v_right_bc, torch.zeros_like(v_right_bc)) \
        + nn.MSELoss()(v_top_bc, torch.zeros_like(v_top_bc)) + nn.MSELoss()(v_bottom_bc, torch.zeros_like(v_bottom_bc)) \
            + nn.MSELoss()(v_left_bc_box, torch.zeros_like(v_left_bc_box)) + nn.MSELoss()(v_right_bc_box, torch.zeros_like(v_right_bc_box)) \
                + nn.MSELoss()(v_top_bc_box, torch.zeros_like(v_top_bc_box)) + nn.MSELoss()(v_bottom_bc_box, torch.zeros_like(v_bottom_bc_box)) # v momentum boundary condition loss
    
    p_bc_loss = nn.MSELoss()(p_left_bc, torch.zeros_like(p_left_bc)) + nn.MSELoss()(p_right_bc, torch.zeros_like(p_right_bc)) \
        + nn.MSELoss()(p_top_bc, torch.zeros_like(p_top_bc)) + nn.MSELoss()(p_bottom_bc, torch.zeros_like(p_bottom_bc)) \
            + nn.MSELoss()(p_left_bc_box, torch.zeros_like(p_left_bc_box)) + nn.MSELoss()(p_right_bc_box, torch.zeros_like(p_right_bc_box)) \
                + nn.MSELoss()(p_top_bc_box, torch.zeros_like(p_top_bc_box)) + nn.MSELoss()(p_bottom_bc_box, torch.zeros_like(p_bottom_bc_box)) # pressure boundary condition loss
    
    bc_loss = (u_bc_loss + v_bc_loss + p_bc_loss) / 120 # total boundary condition loss (empirical weight)
    mse_loss = (x_mom_loss + y_mom_loss + cont_loss) / 4300 # total equation loss (empirical weight)
    
    data_prediction = model(train_CFD_points) # prediction on data points
    data_loss = nn.MSELoss()(data_prediction, train_CFD_values) # data loss
    data_loss = data_loss / 12.44 # scale data loss (empirical weight)
    
    total_loss =  mse_loss + bc_loss + data_loss # total loss
            
    return total_loss

#%% save and resume training
def save_checkpoint(epoch, model, optimizer): # save check point
    checkpoint = {
        'epoch': epoch,
        'model_state_dict': model.state_dict(),
        'optimizer_state_dict': optimizer.state_dict()}
    checkpoint_path = os.path.join(checkpoint_dir, f"checkpoint_epoch_{epoch}.pt")
    torch.save(checkpoint, checkpoint_path)
    print(f"Checkpoint saved at epoch {epoch}")

def load_checkpoint(model, optimizer): # load check point
    latest_checkpoint = max(glob.glob(os.path.join(checkpoint_dir, 'checkpoint_epoch_*.pt')), key=os.path.getctime)
    checkpoint = torch.load(latest_checkpoint)
    epoch = checkpoint['epoch']
    model.load_state_dict(checkpoint['model_state_dict'])
    optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
    print(f"Checkpoint loaded from epoch {epoch}")
    return model, optimizer, epoch

#%% show model summary
def count_parameters(model):
    return sum(p1.numel() for p1 in model.parameters() if p1.requires_grad) # show trainable parameters

#%% set same seeds for initialization
np.random.seed(20) # same random initializations every time numpy
rn.seed(20) # same random initializations every time for python 
torch.manual_seed(20) # set seed for reproducibility

#%% generate sample points (grid)
Uinf = 1 # free stream velocity
nu = 0.001 # kinematic viscosity
rho = 1 # density
length_square = 1 # length of square 
L = round(20 * length_square) # length of domain
D = round(10 * length_square) # width of domain
Re = round(rho * Uinf * length_square / nu) # Reynolds number
square_center_x = 4 # x coordinate of square center
square_center_y = 5 # y coordinate of square center

num_training_points = 25 # boundary points per domain edge
num_training_points_square = 5 # boundary points per square edge

# Generate points for boundaries
x_l = np.random.uniform(0, 0, num_training_points) # left boundary
y_l = np.random.uniform(0, D, num_training_points)

x_box_l = np.random.uniform(square_center_x - (length_square / 2), square_center_x - (length_square / 2), num_training_points_square) # left boundary (box)
y_box_l = np.random.uniform(square_center_y - (length_square / 2), square_center_y + (length_square / 2), num_training_points_square)

x_r = np.random.uniform(L, L, num_training_points) # right boundary
y_r = np.random.uniform(0, D, num_training_points)

x_box_r = np.random.uniform(square_center_x + (length_square / 2), square_center_x + (length_square / 2), num_training_points_square) # right boundary (box)
y_box_r = np.random.uniform(square_center_y - (length_square / 2), square_center_y + (length_square / 2), num_training_points_square)

x_b = np.random.uniform(0, L, num_training_points) # bottom boundary
y_b = np.random.uniform(0, 0, num_training_points)

x_box_b = np.random.uniform(square_center_x - (length_square / 2), square_center_x + (length_square / 2), num_training_points_square) # bottom boundary (box)
y_box_b = np.random.uniform(square_center_y - (length_square / 2), square_center_y - (length_square / 2), num_training_points_square)

x_t = np.random.uniform(0, L, num_training_points) # top boundary
y_t = np.random.uniform(D, D, num_training_points)

x_box_t = np.random.uniform(square_center_x - (length_square / 2), square_center_x + (length_square / 2), num_training_points_square) # top boundary (box)
y_box_t = np.random.uniform(square_center_y + (length_square / 2), square_center_y + (length_square / 2), num_training_points_square)

x_m_l = np.random.uniform(0, square_center_x - (length_square / 2), 750) # points inside domain
y_m_l = np.random.uniform(0, D, 750) # points inside domain

x_m_r = np.random.uniform(square_center_x + (length_square / 2), L, 3350) # points inside domain
y_m_r = np.random.uniform(0, D, 3350) # points inside domain

x_m_t = np.random.uniform(square_center_x - (length_square / 2), square_center_x + (length_square / 2), 100) # points inside domain
y_m_t = np.random.uniform(square_center_y + (length_square / 2), D, 100) # points inside domain

x_m_b = np.random.uniform(square_center_x - (length_square / 2), square_center_x + (length_square / 2), 100) # points inside domain
y_m_b = np.random.uniform(0, square_center_y - (length_square / 2), 100) # points inside domain

x = np.concatenate([x_l, x_r, x_b, x_t, x_m_l, x_m_r, x_m_t, x_m_b, x_box_l, x_box_r, x_box_b, x_box_t]) # x co-ordinates
y = np.concatenate([y_l, y_r, y_b, y_t, y_m_l, y_m_r, y_m_t, y_m_b, y_box_l, y_box_r, y_box_b, y_box_t]) # y co-ordinates

#%% display mesh
plt.figure(dpi = 300)
plt.scatter(x, y, s = 0.1, c = 'k', alpha = 1) # scatter plot
plt.xlim(0, L)
plt.ylim(0, D)
plt.grid(True)
plt.axis('equal')
plt.grid(False)
plt.show()

#%% create neural network model
h = 50 # number of neurons per hidden layer
model = nn.Sequential(
    nn.Linear(2, h), # 2 inputs for x and y
    nn.Tanh(), # hidden layer
    nn.Linear(h, h),
    nn.Tanh(), # hidden layer
    nn.Linear(h, h),
    nn.Tanh(), # hidden layer
    nn.Linear(h, h),
    nn.Tanh(), # hidden layer
    nn.Linear(h, h),
    nn.Tanh(), # hidden layer
    nn.Linear(h, 3)).to(device) # 3 outputs for u, v, p

for m in model.modules():
    if isinstance(m, nn.Linear):
        init.xavier_normal_(m.weight) # weights initialization
        init.constant_(m.bias, 0.0) # bias initialization

print(model) # show model details
num_trainable_params = count_parameters(model) # show trainable parameters
print("Number of trainable parameters:", num_trainable_params)

#%% prepare input data
X_filtered = np.load('X_filtered.npy')
Y_filtered = np.load('Y_filtered.npy')
u_cfd_n = np.load('u_cfd_n.npy')
v_cfd_n = np.load('v_cfd_n.npy')
p_cfd_n = np.load('p_cfd_n.npy')

train_CFD_points = np.vstack((X_filtered, Y_filtered)).T # stack x, y together and convert to tensor
train_CFD_values = np.vstack((u_cfd_n, v_cfd_n, p_cfd_n)).T # stack u, v and p together and convert to tensor

train_CFD_points = torch.tensor(train_CFD_points, dtype=torch.float32).to(device) # convert to tensor and move training points to GPU / CPU, use float64 if more precision is required
train_CFD_values = torch.tensor(train_CFD_values, dtype=torch.float32).to(device)

train_points = np.vstack((x, y)).T # stack x, y together
train_points = torch.tensor(train_points, dtype=torch.float32).to(device) # convert to tensor and move training points to GPU / CPU, use float64 if more precision is required

#%% train using Adam
resume_training = False # True if resuming training
start_epoch = 0 # first epoch (overwritten when resuming from a checkpoint)
optimizer = optim.Adam(model.parameters(), lr = 0.00001) # select optimizer and parameters

checkpoint_dir = './pytorch_checkpoints' # directory to save checkpoints
os.makedirs(checkpoint_dir, exist_ok = True) # create a directory to save checkpoints
if resume_training:
    model, optimizer, start_epoch = load_checkpoint(model, optimizer)
    start_epoch += 1 # start from next epoch
    
for epoch in range(start_epoch, 10001): # resume from the checkpointed epoch if applicable
    optimizer.zero_grad() # clear gradients of all optimized variables
    loss_value = loss_fn(train_points, train_CFD_points, train_CFD_values) # compute total loss
    loss_value.backward() # compute gradient of loss with respect to model parameters
    optimizer.step() # perform parameter update
    if epoch % 100 == 0:
        print(f"{epoch} {loss_value.item()}")

save_checkpoint(epoch, model, optimizer) # save model details
print(f"{epoch} {loss_value.item()}") # show loss values
print("Final Loss Values:")
print("x_mom_loss:", x_mom_loss.item())
print("y_mom_loss:", y_mom_loss.item())
print("cont_loss:", cont_loss.item())
print("u_bc_loss:", u_bc_loss.item())
print("v_bc_loss:", v_bc_loss.item())
print("p_bc_loss:", p_bc_loss.item())
print("mse_loss:", mse_loss.item())
print("bc_loss:", bc_loss.item())

#%% prediction
num_training_points = 2000 # prediction points per domain edge
num_training_points_square = 400 # prediction points per square edge

# Generate points for boundaries
x_l_te = np.random.uniform(0, 0, num_training_points) # left boundary
y_l_te = np.random.uniform(0, D, num_training_points)

x_box_l_te = np.random.uniform(square_center_x - (length_square / 2), square_center_x - (length_square / 2), num_training_points_square) # left boundary
y_box_l_te = np.random.uniform(square_center_y - (length_square / 2), square_center_y + (length_square / 2), num_training_points_square)

x_r_te = np.random.uniform(L, L, num_training_points) # right boundary
y_r_te = np.random.uniform(0, D, num_training_points)

x_box_r_te = np.random.uniform(square_center_x + (length_square / 2), square_center_x + (length_square / 2), num_training_points_square) # right boundary
y_box_r_te = np.random.uniform(square_center_y - (length_square / 2), square_center_y + (length_square / 2), num_training_points_square)

x_b_te = np.random.uniform(0, L, num_training_points) # bottom boundary
y_b_te = np.random.uniform(0, 0, num_training_points)

x_box_b_te = np.random.uniform(square_center_x - (length_square / 2), square_center_x + (length_square / 2), num_training_points_square) # bottom boundary
y_box_b_te = np.random.uniform(square_center_y - (length_square / 2), square_center_y - (length_square / 2), num_training_points_square)

x_t_te = np.random.uniform(0, L, num_training_points) # top boundary
y_t_te = np.random.uniform(D, D, num_training_points)

x_box_t_te = np.random.uniform(square_center_x - (length_square / 2), square_center_x + (length_square / 2), num_training_points_square) # top boundary
y_box_t_te = np.random.uniform(square_center_y + (length_square / 2), square_center_y + (length_square / 2), num_training_points_square)

x_m_l_te = np.random.uniform(0, square_center_x - (length_square / 2), 40000) # points inside domain
y_m_l_te = np.random.uniform(0, D, 40000) # points inside domain

x_m_r_te = np.random.uniform(square_center_x + (length_square / 2), L, 160000) # points inside domain
y_m_r_te = np.random.uniform(0, D, 160000) # points inside domain

x_m_t_te = np.random.uniform(square_center_x - (length_square / 2), square_center_x + (length_square / 2), 5000) # points inside domain
y_m_t_te = np.random.uniform(square_center_y + (length_square / 2), D, 5000) # points inside domain

x_m_b_te = np.random.uniform(square_center_x - (length_square / 2), square_center_x + (length_square / 2), 5000) # points inside domain
y_m_b_te = np.random.uniform(0, square_center_y - (length_square / 2), 5000) # points inside domain

x_te = np.concatenate([x_l_te, x_r_te, x_b_te, x_t_te, x_m_l_te, x_m_r_te, x_m_t_te, x_m_b_te, x_box_l_te, x_box_r_te, x_box_b_te, x_box_t_te]) # x co-ordinates
y_te = np.concatenate([y_l_te, y_r_te, y_b_te, y_t_te, y_m_l_te, y_m_r_te, y_m_t_te, y_m_b_te, y_box_l_te, y_box_r_te, y_box_b_te, y_box_t_te]) # y co-ordinates

test_points = np.vstack((x_te, y_te)).T # stack x, y together
test_points = torch.tensor(test_points, dtype=torch.float32).to(device) # convert to tensor and move training points to GPU / CPU, use float64 if more precision is required

predict = model(test_points).detach().cpu().numpy() # predict and move back to CPU

u = predict[:, 0] * Uinf # predict and bring back to dimensional form
v = predict[:, 1] * Uinf 
p = predict[:, 2] * Uinf * Uinf * rho
velocity_magnitude = np.sqrt(u**2 + v**2) # calculate velocity magnitude

# plotting
plt.figure(dpi = 300)
sc = plt.scatter(x_te, y_te, c = u, cmap = 'jet', s = 1, edgecolor = 'none',\
                 alpha = 1, vmin = -1, vmax = 2.75)
# plt.colorbar(orientation='vertical')
plt.axis('equal')
plt.axis('off')
plt.show()

plt.figure(dpi = 300)
sc = plt.scatter(x_te, y_te, c = v, cmap = 'jet', s = 1, edgecolor = 'none',\
                 alpha = 1, vmin = -1.4, vmax = 1)
# plt.colorbar(orientation='vertical')
plt.axis('equal')
plt.axis('off')
plt.show()

plt.figure(dpi = 300)
sc = plt.scatter(x_te, y_te, c = p, cmap = 'jet', s = 1, edgecolor = 'none',\
                 alpha = 1, vmin = -2.5, vmax = 0.7)
# plt.colorbar(orientation='vertical')
plt.axis('equal')
plt.axis('off')
plt.show()

#%% save PINN output

np.save('X_PINN.npy', x_te)
np.save('Y_PINN.npy', y_te)
np.save('u_PINN.npy', u)
np.save('v_PINN.npy', v)
np.save('p_PINN.npy', p)
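     As a quick sanity check, the saved reconstruction can be interpolated back onto the sparse training points and compared against the data. The following is a minimal sketch; the use of scipy.interpolate.griddata and a relative L2 metric are my additions, not part of the original workflow:

#%% hypothetical sanity check: compare reconstruction against the sparse data
import numpy as np
from scipy.interpolate import griddata
x_p = np.load('X_PINN.npy') # PINN point cloud
y_p = np.load('Y_PINN.npy')
u_p = np.load('u_PINN.npy') # with Uinf = 1 the saved u is already non-dimensional
x_d = np.load('X_filtered.npy') # sparse training data locations
y_d = np.load('Y_filtered.npy')
u_d = np.load('u_cfd_n.npy') # non-dimensional x velocity at data locations
u_i = griddata((x_p, y_p), u_p, (x_d, y_d), method = 'linear') # PINN u at data locations
mask = ~np.isnan(u_i) # drop points outside the interpolation hull
rel_l2 = np.linalg.norm(u_i[mask] - u_d[mask]) / np.linalg.norm(u_d[mask]) # relative L2 error
print("relative L2 error in u:", rel_l2)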

Internal Flows

     The case of the asymmetric lid-driven cavity at Re 1000 is considered. The asymmetric lid-driven cavity features two large counter-rotating vortices in the center and two smaller ones in the corners, i.e. a complex flow-field. Without training data, as stated above, the PINN loss function gets lost in local minima. A small amount of training data guides the PINN loss towards the global minimum. The key is "small amount", to prevent the PINN from simply memorizing the training data; the PINN has to be taught the solution of the Navier-Stokes equations, not to learn the data from CFD or from the wind tunnel! Fig. 3 compares the results for the lid-driven cavity case, while results for the flat-plate case (flow inside an empty room) are compared in Fig. 4. The code is also made available, and a sketch of one way to keep the dataset small follows.
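     A simple way to keep the data term small is to randomly subsample the full CFD field before saving it. A minimal sketch (the file names for the full field and the 1% keep fraction are my assumptions):

#%% hypothetical subsampling: keep only a small fraction of the full CFD field
import numpy as np
X_full = np.load('X_cfd.npy') # full CFD field (hypothetical file names)
Y_full = np.load('Y_cfd.npy')
u_full = np.load('u_cfd.npy') # values assumed already non-dimensionalized
v_full = np.load('v_cfd.npy')
p_full = np.load('p_cfd.npy')
rng = np.random.default_rng(20) # same seed convention as the training scripts
idx = rng.choice(X_full.size, size = int(0.01 * X_full.size), replace = False) # keep ~1% of points
np.save('X_filtered.npy', X_full.ravel()[idx])
np.save('Y_filtered.npy', Y_full.ravel()[idx])
np.save('u_cfd_n.npy', u_full.ravel()[idx])
np.save('v_cfd_n.npy', v_full.ravel()[idx])
np.save('p_cfd_n.npy', p_full.ravel()[idx])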

Fig. 3, Training data from CFD vs. PINN prediction


Fig. 4, Flow inside an empty room

Code

     Here is the code to reproduce Fig. 3, provided training data is available in .csv or any other format Python reads.

#Copyright <2024> <FAHAD BUTT>
#Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
#The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
#THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

#%% import necessary libraries
import os
import numpy as np
import random as rn
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.init as init
import torch.optim as optim
# import torch_directml # enable for AMD and Intel gpu
import glob
device = torch.device("cpu") # cuda for CUDA gpu, cpu for cpu, mps for Apple GPU, torch_directml.device() for any other gpu

#%% 2D navier-stokes equations and boundary conditions
def navier_stokes_equation(ns):
    ns.requires_grad_(True) # watch ns for gradient computation
    
    out = model(ns) # output from model
    
    u = out[:, 0] # x velocity
    v = out[:, 1] # y velocity
    p = out[:, 2] # pressure
    
    du = torch.autograd.grad(u, ns, torch.ones_like(u), create_graph = True)[0] # du/dns
    dv = torch.autograd.grad(v, ns, torch.ones_like(v), create_graph = True)[0] # dv/dns
    dp = torch.autograd.grad(p, ns, torch.ones_like(p), create_graph = True)[0] # dp/dns
    
    u_x = du[:, 0] # du/dx
    u_y = du[:, 1] # du/dy
    v_x = dv[:, 0] # dv/dx
    v_y = dv[:, 1] # dv/dy
    p_x = dp[:, 0] # dp/dx
    p_y = dp[:, 1] # dp/dy
    
    u_xx = torch.autograd.grad(u_x, ns, torch.ones_like(u_x), create_graph = True)[0][:, 0] # d2u/dx2
    u_yy = torch.autograd.grad(u_y, ns, torch.ones_like(u_y), create_graph = True)[0][:, 1] # d2u/dy2
    v_xx = torch.autograd.grad(v_x, ns, torch.ones_like(v_x), create_graph = True)[0][:, 0] # d2v/dx2
    v_yy = torch.autograd.grad(v_y, ns, torch.ones_like(v_y), create_graph = True)[0][:, 1] # d2v/dy2
    
    x_mom = (u * u_x) + (v * u_y) + p_x - ((1 / Re) * (u_xx + u_yy)) # x momentum
    y_mom = (u * v_x) + (v * v_y) + p_y - ((1 / Re) * (v_xx + v_yy)) # y momentum
    cont = u_x + v_y # continuity
     
    # apply boundary conditions
    u_left_bc = u[ns[:, 0] == 0] # x = 0
    u_right_bc = u[ns[:, 0] == L] # x = L
    u_top_bc = u[ns[:, 1] == D] - Uinf # y = D
    u_bottom_bc = u[ns[:, 1] == 0] # y = 0
         
    v_left_bc = v[ns[:, 0] == 0] # x = 0
    v_right_bc = v[ns[:, 0] == L] # x = L
    v_top_bc = v[ns[:, 1] == D] # y = D
    v_bottom_bc = v[ns[:, 1] == 0] # y = 0
    
    p_left_bc = p_x[ns[:, 0] == 0] # x = 0
    p_right_bc = p_x[ns[:, 0] == L] # x = L
    p_top_bc = p[ns[:, 1] == D] # y = D
    p_bottom_bc = p_y[ns[:, 1] == 0] # y = 0
    
    return x_mom, y_mom, cont, \
        u_left_bc, u_right_bc, u_top_bc, u_bottom_bc, \
            v_left_bc, v_right_bc, v_top_bc, v_bottom_bc, \
                p_left_bc, p_right_bc, p_top_bc, p_bottom_bc
                
#%% define loss function
def loss_fn(ns, train_CFD_points, train_CFD_values):
    global x_mom_loss, y_mom_loss, cont_loss, u_bc_loss, v_bc_loss, p_bc_loss, mse_loss, bc_loss, data_loss
    
    ns = ns.clone().detach().requires_grad_(True).to(device) # detach from any previous graph, enable gradient tracking, and move to the selected device
    
    x_mom, y_mom, cont, u_left_bc, u_right_bc, u_top_bc, u_bottom_bc,\
        v_left_bc, v_right_bc, v_top_bc, v_bottom_bc,\
            p_left_bc, p_right_bc, p_top_bc, p_bottom_bc,\
                = navier_stokes_equation(ns) # compute residuals for Navier-Stokes equations with specified boundary and initial conditions
    
    x_mom_loss = nn.MSELoss()(x_mom, torch.zeros_like(x_mom)) # x momentum loss
    y_mom_loss = nn.MSELoss()(y_mom, torch.zeros_like(y_mom)) # y momentum loss
    cont_loss = nn.MSELoss()(cont, torch.zeros_like(cont)) # continuity loss
    
    u_bc_loss = nn.MSELoss()(u_left_bc, torch.zeros_like(u_left_bc)) + nn.MSELoss()(u_right_bc, torch.zeros_like(u_right_bc)) \
        + nn.MSELoss()(u_top_bc, torch.zeros_like(u_top_bc)) + nn.MSELoss()(u_bottom_bc, torch.zeros_like(u_bottom_bc))
        
    v_bc_loss = nn.MSELoss()(v_left_bc, torch.zeros_like(v_left_bc)) + nn.MSELoss()(v_right_bc, torch.zeros_like(v_right_bc)) \
        + nn.MSELoss()(v_top_bc, torch.zeros_like(v_top_bc)) + nn.MSELoss()(v_bottom_bc, torch.zeros_like(v_bottom_bc))
        
    p_bc_loss = nn.MSELoss()(p_left_bc, torch.zeros_like(p_left_bc)) + nn.MSELoss()(p_right_bc, torch.zeros_like(p_right_bc)) \
        + nn.MSELoss()(p_top_bc, torch.zeros_like(p_top_bc)) + nn.MSELoss()(p_bottom_bc, torch.zeros_like(p_bottom_bc))
    
    bc_loss = (u_bc_loss + v_bc_loss + p_bc_loss) / 120 # total boundary condition loss (empirical weight)
    mse_loss = (x_mom_loss + y_mom_loss + cont_loss) / 4000 # total equation loss (empirical weight)
    
    data_prediction = model(train_CFD_points) # prediction on data points
    data_loss = nn.MSELoss()(data_prediction, train_CFD_values) # data loss
    data_loss = data_loss / 20 # scale data loss (empirical weight)
    
    total_loss =  mse_loss + bc_loss + data_loss # total loss
            
    return total_loss

#%% save and resume training
def save_checkpoint(epoch, model, optimizer): # save check point
    checkpoint = {
        'epoch': epoch,
        'model_state_dict': model.state_dict(),
        'optimizer_state_dict': optimizer.state_dict()}
    checkpoint_path = os.path.join(checkpoint_dir, f"checkpoint_epoch_{epoch}.pt")
    torch.save(checkpoint, checkpoint_path)
    print(f"Checkpoint saved at epoch {epoch}")

def load_checkpoint(model, optimizer): # load check point
    latest_checkpoint = max(glob.glob(os.path.join(checkpoint_dir, 'checkpoint_epoch_*.pt')), key=os.path.getctime)
    checkpoint = torch.load(latest_checkpoint)
    epoch = checkpoint['epoch']
    model.load_state_dict(checkpoint['model_state_dict'])
    optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
    print(f"Checkpoint loaded from epoch {epoch}")
    return model, optimizer, epoch

#%% show model summary
def count_parameters(model):
    return sum(p1.numel() for p1 in model.parameters() if p1.requires_grad) # show trainable parameters

#%% set same seeds for initialization
np.random.seed(20) # same random initializations every time numpy
rn.seed(20) # same random initializations every time for python 
torch.manual_seed(20) # set seed for reproducibility

#%% generate sample points (grid)
Uinf = 1 # free stream velocity
nu = 1/1000 # kinematic viscosity
rho = 1 # density
length_square = 1 # length of square 
L = round(1 * length_square) # width of cavity
D = round(2 * length_square) # height of cavity
Re = round(rho * Uinf * length_square / nu) # Reynolds number

num_training_points = 30 # boundary points per cavity edge

# Generate points for boundaries
x_l = np.random.uniform(0, 0, num_training_points) # left boundary
y_l = np.random.uniform(0, D, num_training_points)

x_r = np.random.uniform(L, L, num_training_points) # right boundary
y_r = np.random.uniform(0, D, num_training_points)

x_b = np.random.uniform(0, L, num_training_points) # bottom boundary
y_b = np.random.uniform(0, 0, num_training_points)

x_t = np.random.uniform(0, L, num_training_points) # top boundary
y_t = np.random.uniform(D, D, num_training_points)

x_m = np.random.uniform(0, L, 4000)
y_m = np.random.uniform(0, D, 4000)

x = np.concatenate([x_l, x_r, x_b, x_t, x_m]) # x co-ordinates
y = np.concatenate([y_l, y_r, y_b, y_t, y_m]) # y co-ordinates

#%% display mesh
plt.figure(dpi = 300)
plt.scatter(x, y, s = 0.1, c = 'k', alpha = 1) # scatter plot
plt.xlim(0, L)
plt.ylim(0, D)
plt.grid(True)
plt.axis('equal')
plt.grid(False)
plt.show()

#%% create neural network model
h = 50 # number of neurons per hidden layer
model = nn.Sequential(
    nn.Linear(2, h), # 2 inputs for x and y
    nn.Tanh(), # hidden layer
    nn.Linear(h, h),
    nn.Tanh(), # hidden layer
    nn.Linear(h, h),
    nn.Tanh(), # hidden layer
    nn.Linear(h, h),
    nn.Tanh(), # hidden layer
    nn.Linear(h, h),
    nn.Tanh(), # hidden layer
    nn.Linear(h, 3)).to(device) # 3 outputs for u, v, p

for m in model.modules():
    if isinstance(m, nn.Linear):
        init.xavier_normal_(m.weight) # weights initialization
        init.constant_(m.bias, 0.0) # bias initialization

print(model) # show model details
num_trainable_params = count_parameters(model) # show trainable parameters
print("Number of trainable parameters:", num_trainable_params)

#%% prepare input data
X_filtered = np.load('X_filtered.npy')
Y_filtered = np.load('Y_filtered.npy')
u_cfd_n = np.load('u_cfd_n.npy')
v_cfd_n = np.load('v_cfd_n.npy')
p_cfd_n = np.load('p_cfd_n.npy')

train_CFD_points = np.vstack((X_filtered, Y_filtered)).T # stack x, y together and convert to tensor
train_CFD_values = np.vstack((u_cfd_n, v_cfd_n, p_cfd_n)).T # stack u, v and p together and convert to tensor

train_CFD_points = torch.tensor(train_CFD_points, dtype=torch.float32).to(device) # convert to tensor and move training points to GPU / CPU, use float64 if more precision is required
train_CFD_values = torch.tensor(train_CFD_values, dtype=torch.float32).to(device)

train_points = np.vstack((x, y)).T # stack x, y together
train_points = torch.tensor(train_points, dtype=torch.float32).to(device) # convert to tensor and move training points to GPU / CPU, use float64 if more precision is required

#%% train using Adam
resume_training = False # True if resuming training
start_epoch = 0 # first epoch (overwritten when resuming from a checkpoint)
optimizer = optim.Adam(model.parameters(), lr = 0.00001) # select optimizer and parameters

checkpoint_dir = './pytorch_checkpoints' # directory to save checkpoints
os.makedirs(checkpoint_dir, exist_ok = True) # create a directory to save checkpoints
if resume_training:
    model, optimizer, start_epoch = load_checkpoint(model, optimizer)
    start_epoch += 1 # start from next epoch
    
for epoch in range(start_epoch, 200001): # resume from the checkpointed epoch if applicable
    optimizer.zero_grad() # clear gradients of all optimized variables
    loss_value = loss_fn(train_points, train_CFD_points, train_CFD_values) # compute total loss
    loss_value.backward() # compute gradient of loss with respect to model parameters
    optimizer.step() # perform parameter update
    if epoch % 100 == 0:
        print(f"{epoch} {loss_value.item()}")

save_checkpoint(epoch, model, optimizer) # save model details
print(f"{epoch} {loss_value.item()}") # show loss values
print("Final Loss Values:")
print("x_mom_loss:", x_mom_loss.item())
print("y_mom_loss:", y_mom_loss.item())
print("cont_loss:", cont_loss.item())
print("u_bc_loss:", u_bc_loss.item())
print("v_bc_loss:", v_bc_loss.item())
print("p_bc_loss:", p_bc_loss.item())
print("mse_loss:", mse_loss.item())
print("bc_loss:", bc_loss.item())
print("data_loss:", data_loss.item())

#%% prediction
num_training_points = 250 # prediction points per cavity edge

# Generate points for boundaries
x_l_te = np.random.uniform(0, 0, num_training_points) # left boundary
y_l_te = np.random.uniform(0, D, num_training_points) # span the full boundary, matching the training points

x_r_te = np.random.uniform(L, L, num_training_points) # right boundary
y_r_te = np.random.uniform(0, D, num_training_points)

x_b_te = np.random.uniform(0, L, num_training_points) # bottom boundary, spanning the full width
y_b_te = np.random.uniform(0, 0, num_training_points)

x_t_te = np.random.uniform(0, L, num_training_points) # top boundary
y_t_te = np.random.uniform(D, D, num_training_points)

x_m_te = np.random.uniform(0, L, 10000)
y_m_te = np.random.uniform(0, D, 10000)

x_te = np.concatenate([x_l_te, x_r_te, x_b_te, x_t_te, x_m_te]) # x co-ordinates
y_te = np.concatenate([y_l_te, y_r_te, y_b_te, y_t_te, y_m_te]) # y co-ordinates

test_points = np.vstack((x_te, y_te)).T # stack x, y together
test_points = torch.tensor(test_points, dtype=torch.float32).to(device) # convert to tensor and move training points to GPU / CPU, use float64 if more precision is required

predict = model(test_points).detach().cpu().numpy() # predict and move back to CPU

u = predict[:, 0] * Uinf # predict and bring back to dimensional form
v = predict[:, 1] * Uinf 
p = predict[:, 2] * Uinf * Uinf * rho
velocity_magnitude = np.sqrt(u**2 + v**2) # calculate velocity magnitude

# plotting
plt.figure(dpi = 300)
sc = plt.scatter(x_te, y_te, c = u, cmap = 'jet', s = 1, edgecolor = 'none',\
                  alpha = 1, vmin = -0.3, vmax = 1)
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.show()

plt.figure(dpi = 300)
sc = plt.scatter(x_te, y_te, c = v, cmap = 'jet', s = 1, edgecolor = 'none',\
                  alpha = 1, vmin = -0.55, vmax = 0.3)
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.show()

plt.figure(dpi = 300)
sc = plt.scatter(x_te, y_te, c = p, cmap = 'jet', s = 1, edgecolor = 'none',\
                  alpha = 1, vmin = -0.1, vmax = 0.5)
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.show()

Jets

     This section is about impinging jets. Even with very limited data availability, the PINN can easily predict the flow of an impinging jet. The Reynolds number is kept at 1000. The results are shown in Fig. 5. Within Fig. 5, the top left shows the PINN results while the bottom left shows the training data points. As mentioned previously, the CFD is performed at the highest possible CFL number and the largest possible mesh size to make the training data as noisy as possible while only just preventing unphysical results.

Fig. 5, Comparison of PINN with CFD, with very limited training data

     The code is the same as for the lid-driven cavity case, with changes to the left-side boundary conditions to simulate the jet. The modified function is presented next.

Code

#Copyright <2024> <FAHAD BUTT>
#Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
#The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
#THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

#%% 2D navier-stokes equations and boundary conditions
def navier_stokes_equation(ns):
    ns.requires_grad_(True) # watch ns for gradient computation
    
    out = model(ns) # output from model
    
    u = out[:, 0] # x velocity
    v = out[:, 1] # y velocity
    p = out[:, 2] # pressure
    
    du = torch.autograd.grad(u, ns, torch.ones_like(u), create_graph = True)[0] # du/dns
    dv = torch.autograd.grad(v, ns, torch.ones_like(v), create_graph = True)[0] # dv/dns
    dp = torch.autograd.grad(p, ns, torch.ones_like(p), create_graph = True)[0] # dp/dns
    
    u_x = du[:, 0] # du/dx
    u_y = du[:, 1] # du/dy
    v_x = dv[:, 0] # dv/dx
    v_y = dv[:, 1] # dv/dy
    p_x = dp[:, 0] # dp/dx
    p_y = dp[:, 1] # dp/dy
    
    u_xx = torch.autograd.grad(u_x, ns, torch.ones_like(u_x), create_graph = True)[0][:, 0] # d2u/dx2
    u_yy = torch.autograd.grad(u_y, ns, torch.ones_like(u_y), create_graph = True)[0][:, 1] # d2u/dy2
    v_xx = torch.autograd.grad(v_x, ns, torch.ones_like(v_x), create_graph = True)[0][:, 0] # d2v/dx2
    v_yy = torch.autograd.grad(v_y, ns, torch.ones_like(v_y), create_graph = True)[0][:, 1] # d2v/dy2
    
    x_mom = (u * u_x) + (v * u_y) + p_x - ((1 / Re) * (u_xx + u_yy)) # x momentum
    y_mom = (u * v_x) + (v * v_y) + p_y - ((1 / Re) * (v_xx + v_yy)) # y momentum
    cont = u_x + v_y # continuity
     
    # apply boundary conditions
    u_left_bc_0 = u[(ns[:, 0] == 0) & (ns[:, 1] >= 0) & (ns[:, 1] < 0.875)]  # u = 0 from y = 0 to y = 0.875
    u_left_bc_1 = u[(ns[:, 0] == 0) & (ns[:, 1] >= 0.875) & (ns[:, 1] < 1.125)] - Uinf  # u = Uinf from y = 0.875 to y = 1.125
    u_left_bc_2 = u[(ns[:, 0] == 0) & (ns[:, 1] >= 1.125) & (ns[:, 1] <= D)]  # u = 0 from y = 1.125 to y = 2
    u_left_bc = torch.cat([u_left_bc_0, u_left_bc_1, u_left_bc_2]) # x = 0
    u_right_bc = u[ns[:, 0] == L] # x = L
    u_top_bc = u_y[ns[:, 1] == D] # y = D
    u_bottom_bc = u_y[ns[:, 1] == 0] # y = 0
         
    v_left_bc = v[ns[:, 0] == 0] # x = 0
    v_right_bc = v[ns[:, 0] == L] # x = L
    v_top_bc = v_y[ns[:, 1] == D] # y = D
    v_bottom_bc = v_y[ns[:, 1] == 0] # y = 0
    
    p_left_bc = p_x[ns[:, 0] == 0] # x = 0
    p_right_bc = p_x[ns[:, 0] == L] # x = L
    p_top_bc = p[ns[:, 1] == D] # y = D
    p_bottom_bc = p[ns[:, 1] == 0] # y = 0
    
    return x_mom, y_mom, cont, \
        u_left_bc, u_right_bc, u_top_bc, u_bottom_bc, \
            v_left_bc, v_right_bc, v_top_bc, v_bottom_bc, \
                p_left_bc, p_right_bc, p_top_bc, p_bottom_bc

     Of course, all of this has been done before, but I present a simple yet effective free code for readers to explore and improve upon. Please cite this blog post if you use the code in your research! Good luck!
