This post is about solving linear elasticity 〰️ problems using physics-informed neural networks (PINNs). As you can guess from the title, after successfully teaching neural 🧠 networks to solve the unsolvable, i.e. the Navier-Stokes 🌬 equations, yours truly has now taught 🧑🏫 linear elasticity to the neural networks 🖧 as well. The results of a simple case under the plane-strain assumption, as defined in Fig. 1, are presented in Fig. 2. The results are in excellent agreement 🤝 with those obtained from a commercial 💸 software that "shall not be named".
The equations of motion are written in terms of Young's modulus and Poisson's ratio. The material properties are those of mild steel. Of course, the equations are scaled and non-dimensionalized, since the difference in magnitude between Young's modulus and Poisson's ratio, for example, is far too large, and this becomes a problem while trying to minimize 📉 the custom loss 🗠 function. The equations of motion are in strong form and do not require a mesh.
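For reference, here is a sketch of the strong-form residuals that the loss function in Code 01 drives to zero, written in the code's non-dimensional variables (α and β correspond to alpha and beta in the code, L and D are the plate length and width, and ν is Poisson's ratio):

$$D\,\frac{\partial \sigma_{xx}}{\partial x} + L\,\frac{\partial \tau_{xy}}{\partial y} = 0, \qquad D\,\frac{\partial \tau_{xy}}{\partial x} + L\,\frac{\partial \sigma_{yy}}{\partial y} = 0 \quad \text{(equilibrium)}$$

$$\varepsilon_{xx} = \frac{\partial u}{\partial x}, \qquad \varepsilon_{yy} = \frac{\partial v}{\partial y}, \qquad \varepsilon_{xy} = \tfrac{1}{2}\left(\frac{\partial u}{\partial y} + \frac{\partial v}{\partial x}\right) \quad \text{(kinematics)}$$

$$\sigma_{xx} = \alpha\left[(1-\nu)\,\varepsilon_{xx} + \nu\,\varepsilon_{yy}\right], \qquad \sigma_{yy} = \alpha\left[(1-\nu)\,\varepsilon_{yy} + \nu\,\varepsilon_{xx}\right], \qquad \tau_{xy} = \beta\,\varepsilon_{xy} \quad \text{(plane strain)}$$

$$\frac{L}{D}\,\frac{\partial^{2} \varepsilon_{xx}}{\partial y^{2}} - 2\,\frac{\partial^{2} \varepsilon_{xy}}{\partial x\,\partial y} + \frac{D}{L}\,\frac{\partial^{2} \varepsilon_{yy}}{\partial x^{2}} = 0 \quad \text{(compatibility)}$$

with $\alpha = \frac{1}{(1+\nu)(1-2\nu)}$ and $\beta = \frac{1}{2(1+\nu)}$. The boundary conditions of Fig. 1 enter as additional residuals on the plate edges.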
Cite as: Fahad Butt (2024). LE-PINN (https://whenrobotslove.blogspot.com/2024/08/teaching-linear-elasticity-to-sand-data.html), Blogger. Retrieved Month Date, Year
The color scale for each plot is the same; the scale limits are mentioned within Code 01!
The code 🖥 now has the capability to solve problems of symmetric tension or compression, for example a flat plate being compressed or put in tension without any displacement constraints. Code 02 is listed below. The results from Code 02 are shown in Fig. 4, while the boundary conditions are shown in Fig. 3. The collocation points are shown in Fig. 5. Within Fig. 4, blue means low and red means high values; the scales are mentioned in the caption. Fig. 6 shows the unloaded and loaded (red) plates. It can be clearly seen that the plate is compressed along the horizontal axis and expands along the vertical axis, as expected for this type of loading. A sketch of how such an overlay can be plotted is given right after this paragraph.
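Code 02 as listed does not produce the overlay of Fig. 6; here is a minimal sketch of how such a plot could be made, assuming it is run after the prediction block of Code 02 (so x_te, y_te, u, v, U and V already exist) and with scale_factor being an assumed exaggeration factor so that the tiny displacements become visible:

import matplotlib.pyplot as plt # already imported in Code 02
scale_factor = 1e4 # assumed exaggeration factor, not part of Code 02
plt.figure(dpi = 300)
plt.scatter(x_te, y_te, s = 0.1, c = 'k', alpha = 0.5) # unloaded plate
plt.scatter(x_te + (scale_factor * u * U), y_te + (scale_factor * v * V), s = 0.1, c = 'r', alpha = 0.5) # loaded plate (red)
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("unloaded (black) and loaded (red) plate")
plt.show()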
The code is now capable of predicting stresses in plates with one end fixed and the other end loaded in tension or compression. Of course! There is a hole 🕳 in the center 😆. That code is similar to Code 03 and will not be shared here. The results and boundary conditions are shown in Fig. 8.
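On the hole, the traction-free condition is imposed in polar coordinates: the Cartesian stresses predicted by the network are rotated to radial and tangential components, and the loss penalizes any non-zero radial and shear traction on the circumference. This is exactly what compute_polar_stresses in Code 03 evaluates:

$$\sigma_{r} = \sigma_{xx}\cos^{2}\theta + \sigma_{yy}\sin^{2}\theta + 2\,\tau_{xy}\sin\theta\cos\theta$$

$$\sigma_{\theta} = \sigma_{xx}\sin^{2}\theta + \sigma_{yy}\cos^{2}\theta - 2\,\tau_{xy}\sin\theta\cos\theta$$

$$\tau_{r\theta} = \left(\sigma_{yy} - \sigma_{xx}\right)\sin\theta\cos\theta + \tau_{xy}\left(\cos^{2}\theta - \sin^{2}\theta\right)$$

with $\sigma_{r} = 0$ and $\tau_{r\theta} = 0$ enforced at the collocation points lying on the hole boundary.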
Code 01
#Copyright <2024> <FAHAD BUTT>
#Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
#The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
#THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#%% import necessary libraries
import os
import numpy as np
import random as rn
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.init as init
import torch.optim as optim
# import torch_directml # enable for AMD and Intel gpu
import glob
device = torch.device("cpu") # cuda for CUDA gpu, cpu for cpu, mps for Apple GPU, torch_directml.device() for any other gpu
#%% 2D linear elasticity equations and boundary conditions
def linear_elasticity_equation(lee):
lee.requires_grad_(True) # watch lee for gradient computation
out = model(lee) # output from model
u = out[:, 0] # x displacement
v = out[:, 1] # y displacement
sxx = out[:, 2] # normal stress x
syy = out[:, 3] # normal stress y
sxy = out[:, 4] # shear stress xy
exx = out[:, 5] # normal strain x
eyy = out[:, 6] # normal strain y
exy = out[:, 7] # shear strain xy
du = torch.autograd.grad(u, lee, torch.ones_like(u), create_graph = True)[0] # du/dlee
dv = torch.autograd.grad(v, lee, torch.ones_like(v), create_graph = True)[0] # dv/dlee
dsxx = torch.autograd.grad(sxx, lee, torch.ones_like(sxx), create_graph = True)[0] # dsxx/dlee
dsyy = torch.autograd.grad(syy, lee, torch.ones_like(syy), create_graph = True)[0] # dsyy/dlee
dsxy = torch.autograd.grad(sxy, lee, torch.ones_like(sxy), create_graph = True)[0] # dsxy/dlee
dexx = torch.autograd.grad(exx, lee, torch.ones_like(exx), create_graph = True)[0] # dexx/dlee
deyy = torch.autograd.grad(eyy, lee, torch.ones_like(eyy), create_graph = True)[0] # deyy/dlee
dexy = torch.autograd.grad(exy, lee, torch.ones_like(exy), create_graph = True)[0] # dexy/dlee
u_x = du[:, 0] # du/dx
u_y = du[:, 1] # du/dy
v_x = dv[:, 0] # dv/dx
v_y = dv[:, 1] # dv/dy
sxx_x = dsxx[:, 0] # dsxx/dx
syy_y = dsyy[:, 1] # dsyy/dy
sxy_x = dsxy[:, 0] # dtxy/dx
sxy_y = dsxy[:, 1] # dtxy/dy
exx_y = dexx[:, 1] # dexx/dy
eyy_x = deyy[:, 0] # deyy/dx
exy_x = dexy[:, 0] # dexy/dx
x_mom = (D * sxx_x) + (L * sxy_y) # x momentum
y_mom = (D * sxy_x) + (L * syy_y) # y momentum
x_kin = exx - u_x # kinematic x
y_kin = eyy - v_y # kinematic y
xy_kin = exy - (0.5 * (u_y + v_x)) # kinematic xy
ps_xx = sxx - (alpha * (((1 - v_ps) * exx) + (v_ps * eyy))) # plane strain xx
ps_yy = syy - (alpha * (((1 - v_ps) * eyy) + (v_ps * exx))) # plane strain yy
ps_xy = sxy - (beta * exy) # plane strain xy
exx_yy = torch.autograd.grad(exx_y, lee, torch.ones_like(exx_y), create_graph=True)[0][:, 1] # d2exx/dy2
eyy_xx = torch.autograd.grad(eyy_x, lee, torch.ones_like(eyy_x), create_graph=True)[0][:, 0] # d2eyy/dx2
exy_xy = torch.autograd.grad(exy_x, lee, torch.ones_like(exy_x), create_graph=True)[0][:, 1] # d2exy/dxdy
comp = ((L / D) * exx_yy) - (2 * exy_xy) + ((D / L) * eyy_xx) # compatibility condition
# apply boundary conditions
u_left_bc = u[lee[:, 0] == 0] # x = 0
v_left_bc = v[lee[:, 0] == 0] # x = 0
sxy_left_bc = sxy[lee[:, 0] == 0] # x = 0
sxy_right_bc = sxy[lee[:, 0] == L] # x = L
sxy_bottom_bc = sxy[lee[:, 1] == 0] # y = 0
sxy_top_bc = sxy[lee[:, 1] == D] # y = D
syy_bottom_bc = syy[lee[:, 1] == 0] # y = 0
syy_top_bc = syy[lee[:, 1] == D] # y = D
sxx_right_bc = sxx[lee[:, 0] == L] - 1 # x = L
return x_mom, y_mom, x_kin, y_kin, xy_kin, ps_xx, ps_yy, ps_xy, comp, \
u_left_bc, v_left_bc, \
sxy_left_bc, sxy_right_bc, sxy_bottom_bc, sxy_top_bc, \
syy_bottom_bc, syy_top_bc, \
sxx_right_bc
#%% define loss function
def loss_fn(lee):
lee = lee.clone().detach().requires_grad_(True).to(device) # move to "device" and use x32 data type
x_mom, y_mom, x_kin, y_kin, xy_kin, ps_xx, ps_yy, ps_xy, comp, \
u_left_bc, v_left_bc, \
sxy_left_bc, sxy_right_bc, sxy_bottom_bc, sxy_top_bc, \
syy_bottom_bc, syy_top_bc, \
sxx_right_bc = linear_elasticity_equation(lee) # compute residuals for equations with specified boundary conditions
x_mom_loss = nn.MSELoss()(x_mom, torch.zeros_like(x_mom)) # x momentum loss
y_mom_loss = nn.MSELoss()(y_mom, torch.zeros_like(y_mom)) # y momentum loss
x_kin_loss = nn.MSELoss()(x_kin, torch.zeros_like(x_kin)) # kinematic equation loss x
y_kin_loss = nn.MSELoss()(y_kin, torch.zeros_like(y_kin)) # kinematic equation loss y
xy_kin_loss = nn.MSELoss()(xy_kin, torch.zeros_like(xy_kin)) # kinematic equation loss xy
ps_xx_loss = nn.MSELoss()(ps_xx, torch.zeros_like(ps_xx)) # plane strain loss xx
ps_yy_loss = nn.MSELoss()(ps_yy, torch.zeros_like(ps_yy)) # plane strain loss yy
ps_xy_loss = nn.MSELoss()(ps_xy, torch.zeros_like(ps_xy)) # plane strain loss xy
comp_loss = nn.MSELoss()(comp, torch.zeros_like(comp)) # compatibility loss
u_bc_loss = nn.MSELoss()(u_left_bc, torch.zeros_like(u_left_bc)) # u boundary condition loss
v_bc_loss = nn.MSELoss()(v_left_bc, torch.zeros_like(v_left_bc)) # v boundary condition loss
sxx_bc_loss = nn.MSELoss()(sxx_right_bc, torch.zeros_like(sxx_right_bc)) # sxx boundary condition loss
sxy_bc_loss = nn.MSELoss()(sxy_left_bc, torch.zeros_like(sxy_left_bc)) + nn.MSELoss()(sxy_right_bc, torch.zeros_like(sxy_right_bc)) \
+ nn.MSELoss()(sxy_bottom_bc, torch.zeros_like(sxy_bottom_bc)) + nn.MSELoss()(sxy_top_bc, torch.zeros_like(sxy_top_bc)) # sxy boundary condition loss
syy_bc_loss = nn.MSELoss()(syy_bottom_bc, torch.zeros_like(syy_bottom_bc)) + nn.MSELoss()(syy_top_bc, torch.zeros_like(syy_top_bc)) # syy boundary condition loss
bc_loss = (u_bc_loss + v_bc_loss + sxx_bc_loss + sxy_bc_loss + syy_bc_loss) / 30 # total boundary condition loss
mse_loss = (x_mom_loss + y_mom_loss + \
x_kin_loss + y_kin_loss + xy_kin_loss + \
ps_xx_loss + ps_yy_loss + ps_xy_loss + comp_loss) / 100 # total equation loss
total_loss = mse_loss + bc_loss # total loss
return total_loss
#%% save and resume training
def save_checkpoint(epoch, model, optimizer): # save check point
checkpoint = {
'epoch': epoch,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict()}
checkpoint_path = os.path.join(checkpoint_dir, f"checkpoint_epoch_{epoch}.pt")
torch.save(checkpoint, checkpoint_path)
print(f"Checkpoint saved at epoch {epoch}")
def load_checkpoint(model, optimizer): # load check point
latest_checkpoint = max(glob.glob(os.path.join(checkpoint_dir, 'checkpoint_epoch_*.pt')), key=os.path.getctime)
checkpoint = torch.load(latest_checkpoint)
epoch = checkpoint['epoch']
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
print(f"Checkpoint loaded from epoch {epoch}")
return model, optimizer, epoch
#%% show model summary
def count_parameters(model):
return sum(p1.numel() for p1 in model.parameters() if p1.requires_grad) # show trainable parameters
#%% set same seeds for initialization
np.random.seed(20) # same random initializations every time numpy
rn.seed(20) # same random initializations every time for python
torch.manual_seed(20) # set seed for reproducibility
#%% generate sample points (grid)
s_0 = 1e6 # applied load [Pa]
E = 2e11 # Young's modulus [Pa]
v_ps = 0.3 # Poisson's ratio
L = 1 # Length of plate [m]
D = 0.5 # Width of plate [m]
E_0 = E / s_0 # non dimensional Young's modulus
U = s_0 * L / E # scaled u displacement
V = s_0 * D / E # scaled v displacement
e_0 = s_0 / E # scaled strain
alpha = 1 / ((1 + v_ps) * (1 - (2 * v_ps))) # non dimensional parameter
beta = 1 / (2 * (1 + v_ps)) # non dimensional parameter
num_training_points = 10 # per length
# Generate points for boundaries
x_l = np.random.uniform(0, 0, round(num_training_points * D)) # left boundary
y_l = np.random.uniform(0, D, round(num_training_points * D))
x_r = np.random.uniform(L, L, round(num_training_points * D)) # right boundary
y_r = np.random.uniform(0, D, round(num_training_points * D))
x_b = np.random.uniform(0, L, round(num_training_points * L)) # bottom boundary
y_b = np.random.uniform(0, 0, round(num_training_points * L))
x_t = np.random.uniform(0, L, round(num_training_points * L)) # top boundary
y_t = np.random.uniform(D, D, round(num_training_points * L))
x_m = np.random.uniform(0, L, 200)
y_m = np.random.uniform(0, D, 200)
x = np.concatenate([x_l, x_r, x_b, x_t, x_m]) # x co-ordinates
y = np.concatenate([y_l, y_r, y_b, y_t, y_m]) # y co-ordinates
#%% display mesh
plt.figure(dpi = 300)
plt.scatter(x, y, s = 0.1, c = 'k', alpha = 1) # scatter plot
plt.xlim(0, L)
plt.ylim(0, D)
plt.grid(True)
plt.axis('equal')
plt.grid(False)
plt.show()
#%% create neural network model
h = 50 # number of neurons per hidden layer
model = nn.Sequential(
nn.Linear(2, h), # 2 inputs for x and y coordinates
nn.Tanh(), # hidden layer
nn.Linear(h, h),
nn.Tanh(), # hidden layer
nn.Linear(h, 8)).to(device) # 8 outputs for 3 stresses, 3 strains, 2 displacements
for m in model.modules():
if isinstance(m, nn.Linear):
init.xavier_normal_(m.weight) # weights initialization
init.constant_(m.bias, 0.0) # bias initialization
print(model) # show model details
num_trainable_params = count_parameters(model) # show trainable parameters
print("Number of trainable parameters:", num_trainable_params)
#%% prepare input data
train_points = np.vstack((x, y)).T # stack x, y together
train_points = torch.tensor(train_points, dtype=torch.float32).to(device) # convert to tensor and move training points to GPU / CPU, use float64 if more precision is required
#%% train using Adam
resume_training = False # True if resuming training
start_epoch = 1
optimizer = optim.Adam(model.parameters(), lr = 0.00001) # select optimizer and parameters
checkpoint_dir = './pytorch_checkpoints' # directory to save checkpoints
os.makedirs(checkpoint_dir, exist_ok = True) # create a directory to save checkpoints
if resume_training:
model, optimizer, start_epoch = load_checkpoint(model, optimizer)
start_epoch += 1 # start from next epoch
for epoch in range(start_epoch, 200001): # start_epoch > 1 when resuming from a checkpoint
optimizer.zero_grad() # clear gradients of all optimized variables
loss_value = loss_fn(train_points) # compute prediction by passing inputs to model
loss_value.backward() # compute gradient of loss with respect to model parameters
optimizer.step() # perform parameter update
if epoch % 100 == 0:
print(f"{epoch} {loss_value.item()}")
save_checkpoint(epoch, model, optimizer) # save model details
#%% prediction
num_testing_points = 250 # per length
# Generate points for boundaries
x_l_te = np.random.uniform(0, 0, num_testing_points) # left boundary
y_l_te = np.random.uniform(0, D, num_testing_points)
x_r_te = np.random.uniform(L, L, num_testing_points) # right boundary
y_r_te = np.random.uniform(0, D, num_testing_points)
x_b_te = np.random.uniform(0, L, num_testing_points) # bottom boundary
y_b_te = np.random.uniform(0, 0, num_testing_points)
x_t_te = np.random.uniform(0, L, num_testing_points) # top boundary
y_t_te = np.random.uniform(D, D, num_testing_points)
x_m_te = np.random.uniform(0, L, 10000)
y_m_te = np.random.uniform(0, D, 10000)
x_te = np.concatenate([x_l_te, x_r_te, x_b_te, x_t_te, x_m_te]) # x co-ordinates
y_te = np.concatenate([y_l_te, y_r_te, y_b_te, y_t_te, y_m_te]) # y co-ordinates
test_points = np.vstack((x_te, y_te)).T # stack x, y together
test_points = torch.tensor(test_points, dtype=torch.float32).to(device) # convert to tensor and move training points to GPU / CPU, use float64 if more precision is required
predict = model(test_points).detach().cpu().numpy() # predict and move back to CPU
u = predict[:, 0]
v = predict[:, 1]
sxx = predict[:, 2]
syy = predict[:, 3]
sxy = predict[:, 4]
exx = predict[:, 5]
eyy = predict[:, 6]
exy = predict[:, 7]
# plotting
plt.figure(dpi = 300)
sc = plt.scatter(x_te, y_te, c = u * U * 1000, cmap = 'jet', s = 1, edgecolor = 'none',\
alpha = 1, vmin = 0, vmax = 0.0045)
plt.colorbar(orientation = 'vertical')
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("u")
plt.show()
plt.figure(dpi = 300)
sc = plt.scatter(x_te, y_te, c = v * V * 1000, cmap = 'jet', s = 1, edgecolor = 'none',\
alpha = 1, vmin = -0.0004, vmax = 0.0004)
plt.colorbar(orientation = 'vertical')
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("v")
plt.show()
plt.figure(dpi = 300)
sc = plt.scatter(x_te, y_te, c = sxx * s_0 * 1e-6, cmap = 'jet', s = 1, edgecolor = 'none',\
alpha = 1, vmin = 0.9, vmax = 1.1)
plt.colorbar(orientation = 'vertical')
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("sxx")
plt.show()
plt.figure(dpi = 300)
sc = plt.scatter(x_te, y_te, c = syy * s_0 * 1e-6, cmap = 'jet', s = 1, edgecolor = 'none',\
alpha = 1, vmin = 0, vmax = 0.4)
plt.colorbar(orientation = 'vertical')
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("syy")
plt.show()
plt.figure(dpi = 300)
sc = plt.scatter(x_te, y_te, c = sxy * s_0 * 1e-6, cmap = 'jet', s = 1, edgecolor = 'none',\
alpha = 1, vmin = -0.4, vmax = 0.4)
plt.colorbar(orientation = 'vertical')
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("txy")
plt.show()
plt.figure(dpi = 300)
sc = plt.scatter(x_te, y_te, c = exx * e_0, cmap = 'jet', s = 1, edgecolor = 'none',\
alpha = 1, vmin = 3.5e-6, vmax = 5e-6)
plt.colorbar(orientation = 'vertical')
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("exx")
plt.show()
plt.figure(dpi = 300)
sc = plt.scatter(x_te, y_te, c = eyy * e_0, cmap = 'jet', s = 1, edgecolor = 'none',\
alpha = 1, vmin = -2.7e-6, vmax = 0)
plt.colorbar(orientation = 'vertical')
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("eyy")
plt.show()
plt.figure(dpi = 300)
sc = plt.scatter(x_te, y_te, c = exy * e_0, cmap = 'jet', s = 1, edgecolor = 'none',\
alpha = 1, vmin = -1e-6, vmax = 1e-6)
plt.colorbar(orientation = 'vertical')
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("gxy")
plt.show()
Code 02
# Copyright <2024> <FAHAD BUTT>
# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#%% import necessary libraries
import os
import numpy as np
import random as rn
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.init as init
import torch.optim as optim
# import torch_directml # enable for AMD and Intel gpu
import glob
device = torch.device("cpu") # cuda for CUDA gpu, cpu for cpu, mps for Apple GPU, torch_directml.device() for any other gpu
#%% 2D linear elasticity equations and boundary conditions
def linear_elasticity_equation(lee):
lee.requires_grad_(True) # watch lee for gradient computation
out = model(lee) # output from model
u = out[:, 0] # x displacement
v = out[:, 1] # y displacement
sxx = out[:, 2] # normal stress x
syy = out[:, 3] # normal stress y
sxy = out[:, 4] # shear stress xy
exx = out[:, 5] # normal strain x
eyy = out[:, 6] # normal strain y
exy = out[:, 7] # shear strain xy
du = torch.autograd.grad(u, lee, torch.ones_like(u), create_graph = True)[0] # du/dlee
dv = torch.autograd.grad(v, lee, torch.ones_like(v), create_graph = True)[0] # dv/dlee
dsxx = torch.autograd.grad(sxx, lee, torch.ones_like(sxx), create_graph = True)[0] # dsxx/dlee
dsyy = torch.autograd.grad(syy, lee, torch.ones_like(syy), create_graph = True)[0] # dsyy/dlee
dsxy = torch.autograd.grad(sxy, lee, torch.ones_like(sxy), create_graph = True)[0] # dsxy/dlee
dexx = torch.autograd.grad(exx, lee, torch.ones_like(exx), create_graph = True)[0] # dexx/dlee
deyy = torch.autograd.grad(eyy, lee, torch.ones_like(eyy), create_graph = True)[0] # deyy/dlee
dexy = torch.autograd.grad(exy, lee, torch.ones_like(exy), create_graph = True)[0] # dexy/dlee
u_x = du[:, 0] # du/dx
u_y = du[:, 1] # du/dy
v_x = dv[:, 0] # dv/dx
v_y = dv[:, 1] # dv/dy
sxx_x = dsxx[:, 0] # dsxx/dx
syy_y = dsyy[:, 1] # dsyy/dy
sxy_x = dsxy[:, 0] # dtxy/dx
sxy_y = dsxy[:, 1] # dtxy/dy
exx_y = dexx[:, 1] # dexx/dy
eyy_x = deyy[:, 0] # deyy/dx
exy_x = dexy[:, 0] # dexy/dx
x_mom = (D * sxx_x) + (L * sxy_y) # x momentum
y_mom = (D * sxy_x) + (L * syy_y) # y momentum
x_kin = exx - u_x # kinematic x
y_kin = eyy - v_y # kinematic y
xy_kin = exy - (0.5 * (u_y + v_x)) # kinematic xy
ps_xx = sxx - (alpha * (((1 - v_ps) * exx) + (v_ps * eyy))) # plane strain xx
ps_yy = syy - (alpha * (((1 - v_ps) * eyy) + (v_ps * exx))) # plane strain yy
ps_xy = sxy - (beta * exy) # plane strain xy
exx_yy = torch.autograd.grad(exx_y, lee, torch.ones_like(exx_y), create_graph=True)[0][:, 1] # d2exx/dy2
eyy_xx = torch.autograd.grad(eyy_x, lee, torch.ones_like(eyy_x), create_graph=True)[0][:, 0] # d2eyy/dx2
exy_xy = torch.autograd.grad(exy_x, lee, torch.ones_like(exy_x), create_graph=True)[0][:, 1] # d2exy/dxdy
comp = ((L / D) * exx_yy) - (2 * exy_xy) + ((D / L) * eyy_xx) # compatibility condition
# apply boundary conditions
u_center = u[(lee[:, 0] == L / 2) & (lee[:, 1] == D / 2)] # x = L / 2, y = D / 2
v_center = v[(lee[:, 0] == L / 2) & (lee[:, 1] == D / 2)] # x = L / 2, y = D / 2
sxy_left_bc = sxy[lee[:, 0] == 0] # x = 0
sxy_right_bc = sxy[lee[:, 0] == L] # x = L
sxy_bottom_bc = sxy[lee[:, 1] == 0] # y = 0
sxy_top_bc = sxy[lee[:, 1] == D] # y = D
syy_bottom_bc = syy[lee[:, 1] == 0] # y = 0
syy_top_bc = syy[lee[:, 1] == D] # y = D
sxx_right_bc = sxx[lee[:, 0] == L] + 1 # x = L
sxx_left_bc = sxx[lee[:, 0] == 0] + 1 # x = 0
return x_mom, y_mom, x_kin, y_kin, xy_kin, ps_xx, ps_yy, ps_xy, comp, \
sxy_left_bc, sxy_right_bc, sxy_bottom_bc, sxy_top_bc, \
syy_bottom_bc, syy_top_bc, \
sxx_right_bc, sxx_left_bc, \
u_center, v_center
#%% define loss function
def loss_fn(lee):
global u_bc_loss, v_bc_loss, sxx_bc_loss, sxy_bc_loss, syy_bc_loss
lee = lee.clone().detach().requires_grad_(True).to(device) # move to "device" and use x32 data type
x_mom, y_mom, x_kin, y_kin, xy_kin, ps_xx, ps_yy, ps_xy, comp, \
sxy_left_bc, sxy_right_bc, sxy_bottom_bc, sxy_top_bc, \
syy_bottom_bc, syy_top_bc, \
sxx_right_bc, sxx_left_bc, \
u_center, v_center \
= linear_elasticity_equation(lee) # compute residuals for equations with specified boundary conditions
x_mom_loss = nn.MSELoss()(x_mom, torch.zeros_like(x_mom)) # x momentum loss
y_mom_loss = nn.MSELoss()(y_mom, torch.zeros_like(y_mom)) # y momentum loss
x_kin_loss = nn.MSELoss()(x_kin, torch.zeros_like(x_kin)) # kinematic equation loss x
y_kin_loss = nn.MSELoss()(y_kin, torch.zeros_like(y_kin)) # kinematic equation loss y
xy_kin_loss = nn.MSELoss()(xy_kin, torch.zeros_like(xy_kin)) # kinematic equation loss xy
ps_xx_loss = nn.MSELoss()(ps_xx, torch.zeros_like(ps_xx)) # plane strain loss xx
ps_yy_loss = nn.MSELoss()(ps_yy, torch.zeros_like(ps_yy)) # plane strain loss yy
ps_xy_loss = nn.MSELoss()(ps_xy, torch.zeros_like(ps_xy)) # plane strain loss xy
comp_loss = nn.MSELoss()(comp, torch.zeros_like(comp)) # compatibility loss
u_bc_loss = nn.MSELoss()(u_center, torch.zeros_like(u_center)) # u boundary condition loss
v_bc_loss = nn.MSELoss()(v_center, torch.zeros_like(v_center)) # v boundary condition loss
sxx_bc_loss = nn.MSELoss()(sxx_left_bc, torch.zeros_like(sxx_left_bc)) + nn.MSELoss()(sxx_right_bc, torch.zeros_like(sxx_right_bc)) # sxx boundary condition loss
sxy_bc_loss = nn.MSELoss()(sxy_left_bc, torch.zeros_like(sxy_left_bc)) + nn.MSELoss()(sxy_right_bc, torch.zeros_like(sxy_right_bc)) \
+ nn.MSELoss()(sxy_bottom_bc, torch.zeros_like(sxy_bottom_bc)) + nn.MSELoss()(sxy_top_bc, torch.zeros_like(sxy_top_bc)) # sxy boundary condition loss
syy_bc_loss = nn.MSELoss()(syy_bottom_bc, torch.zeros_like(syy_bottom_bc)) + nn.MSELoss()(syy_top_bc, torch.zeros_like(syy_top_bc)) # syy boundary condition loss
bc_loss = (u_bc_loss + v_bc_loss + sxx_bc_loss + sxy_bc_loss + syy_bc_loss) / 60 # total boundary condition loss
mse_loss = (x_mom_loss + y_mom_loss + \
x_kin_loss + y_kin_loss + xy_kin_loss + \
ps_xx_loss + ps_yy_loss + ps_xy_loss + comp_loss) / 401 # total equation loss
total_loss = mse_loss + bc_loss # total loss
return total_loss
#%% save and resume training
def save_checkpoint(epoch, model, optimizer): # save check point
checkpoint = {
'epoch': epoch,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict()}
checkpoint_path = os.path.join(checkpoint_dir, f"checkpoint_epoch_{epoch}.pt")
torch.save(checkpoint, checkpoint_path)
print(f"Checkpoint saved at epoch {epoch}")
def load_checkpoint(model, optimizer): # load check point
latest_checkpoint = max(glob.glob(os.path.join(checkpoint_dir, 'checkpoint_epoch_*.pt')), key=os.path.getctime)
checkpoint = torch.load(latest_checkpoint)
epoch = checkpoint['epoch']
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
print(f"Checkpoint loaded from epoch {epoch}")
return model, optimizer, epoch
#%% show model summary
def count_parameters(model):
return sum(p1.numel() for p1 in model.parameters() if p1.requires_grad) # show trainable parameters
#%% set same seeds for initialization
np.random.seed(20) # same random initializations every time numpy
rn.seed(20) # same random initializations every time for python
torch.manual_seed(20) # set seed for reproducibility
#%% generate sample points (grid)
s_0 = 1e6 # applied load [Pa]
E = 2e11 # Young's modulus [Pa]
v_ps = 0.3 # Poisson's ratio
L = 1 # Length of plate [m]
D = 0.5 # Width of plate [m]
E_0 = E / s_0 # non dimensional Young's modulus
U = s_0 * L / E # scaled u displacement
V = s_0 * D / E # scaled v displacement
e_0 = s_0 / E # scaled strain
alpha = 1 / ((1 + v_ps) * (1 - (2 * v_ps))) # non dimensional parameter
beta = 1 / (2 * (1 + v_ps)) # non dimensional parameter
num_training_points = 20 # per length
# Generate points for boundaries
x_l = np.random.uniform(0, 0, round(num_training_points * D)) # left boundary
y_l = np.random.uniform(0, D, round(num_training_points * D))
x_r = np.random.uniform(L, L, round(num_training_points * D)) # right boundary
y_r = np.random.uniform(0, D, round(num_training_points * D))
x_b = np.random.uniform(0, L, round(num_training_points * L)) # bottom boundary
y_b = np.random.uniform(0, 0, round(num_training_points * L))
x_t = np.random.uniform(0, L, round(num_training_points * L)) # top boundary
y_t = np.random.uniform(D, D, round(num_training_points * L))
x_m = np.random.uniform(0, L, 400) # interior points
y_m = np.random.uniform(0, D, 400)
x_center = L / 2 # center point
y_center = D / 2
x = np.concatenate([x_l, x_r, x_b, x_t, x_m, [x_center]]) # x co-ordinates
y = np.concatenate([y_l, y_r, y_b, y_t, y_m, [y_center]]) # y co-ordinates
#%% display mesh
plt.figure(dpi = 300)
plt.scatter(x, y, s = 1, c = 'k', alpha = 1) # scatter plot
plt.xlim(0, L)
plt.ylim(0, D)
plt.grid(True)
plt.axis('equal')
plt.grid(False)
plt.show()
#%% create neural network model
h = 50 # number of neurons per hidden layer
model = nn.Sequential(
nn.Linear(2, h), # 2 inputs for x and y
nn.Tanh(), # hidden layer
nn.Linear(h, h),
nn.Tanh(), # hidden layer
nn.Linear(h, 8)).to(device) # 8 outputs for 3 stresses, 3 strains, 2 displacements
for m in model.modules():
if isinstance(m, nn.Linear):
init.xavier_normal_(m.weight) # weights initialization
init.constant_(m.bias, 0.0) # bias initialization
print(model) # show model details
num_trainable_params = count_parameters(model) # show trainable parameters
print("Number of trainable parameters:", num_trainable_params)
#%% prepare input data
train_points = np.vstack((x, y)).T # stack x, y together
train_points = torch.tensor(train_points, dtype=torch.float32).to(device) # convert to tensor and move training points to GPU / CPU, use float64 if more precision is required
#%% train using Adam
resume_training = False # True if resuming training
start_epoch = 1
optimizer = optim.Adam(model.parameters(), lr = 0.000001) # select optimizer and parameters
checkpoint_dir = './pytorch_checkpoints' # directory to save checkpoints
os.makedirs(checkpoint_dir, exist_ok = True) # create a directory to save checkpoints
if resume_training:
model, optimizer, start_epoch = load_checkpoint(model, optimizer)
start_epoch += 1 # start from next epoch
for epoch in range(start_epoch, 100001): # start_epoch > 1 when resuming from a checkpoint
optimizer.zero_grad() # clear gradients of all optimized variables
loss_value = loss_fn(train_points) # compute prediction by passing inputs to model
loss_value.backward() # compute gradient of loss with respect to model parameters
optimizer.step() # perform parameter update
if epoch % 100 == 0:
print(f"{epoch} {loss_value.item()}")
save_checkpoint(epoch, model, optimizer) # save model details
print(f"{epoch} {loss_value.item()}") # show loss values
print("Final Loss Values:")
print("u_bc_loss:", u_bc_loss.item())
print("v_bc_loss:", v_bc_loss.item())
print("sxx_bc_loss:", sxx_bc_loss.item())
print("sxy_bc_loss:", sxy_bc_loss.item())
print("syy_bc_loss:", syy_bc_loss.item())
#%% prediction
num_training_points = 800 # per length
# Generate points for boundaries
x_l_te = np.random.uniform(0, 0, round(num_training_points * D)) # left boundary
y_l_te = np.random.uniform(0, D, round(num_training_points * D))
x_r_te = np.random.uniform(L, L, round(num_training_points * D)) # right boundary
y_r_te = np.random.uniform(0, D, round(num_training_points * D))
x_b_te = np.random.uniform(0, L, round(num_training_points * L)) # bottom boundary
y_b_te = np.random.uniform(0, 0, round(num_training_points * L))
x_t_te = np.random.uniform(0, L, round(num_training_points * L)) # top boundary
y_t_te = np.random.uniform(D, D, round(num_training_points * L))
x_m_te = np.random.uniform(0, L, 80000)
y_m_te = np.random.uniform(0, D, 80000)
x_te = np.concatenate([x_l_te, x_r_te, x_b_te, x_t_te, x_m_te]) # x co-ordinates
y_te = np.concatenate([y_l_te, y_r_te, y_b_te, y_t_te, y_m_te]) # y co-ordinates
test_points = np.vstack((x_te, y_te)).T # stack x, y together
test_points = torch.tensor(test_points, dtype=torch.float32).to(device) # convert to tensor and move training points to GPU / CPU, use float64 if more precision is required
predict = model(test_points).detach().cpu().numpy() # predict and move back to CPU
u = predict[:, 0]
v = predict[:, 1]
sxx = predict[:, 2]
syy = predict[:, 3]
sxy = predict[:, 4]
exx = predict[:, 5]
eyy = predict[:, 6]
exy = predict[:, 7]
#%% plotting
plt.figure(dpi = 300)
sc = plt.scatter(x_te, y_te, c = u * U * 1000, cmap = 'jet', s = 1, edgecolor='none', alpha = 1)
plt.colorbar(orientation = 'horizontal')
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("u")
plt.show()
plt.figure(dpi = 300)
sc = plt.scatter(x_te, y_te, c = v * V * 1000, cmap = 'jet', s = 1, edgecolor='none', alpha = 1)
plt.colorbar(orientation = 'horizontal')
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("v")
plt.show()
plt.figure(dpi = 300)
sc = plt.scatter(x_te, y_te, c = sxx * s_0 * 1e-6, cmap = 'jet', s = 1, edgecolor = 'none', alpha = 1)
plt.colorbar(orientation = 'horizontal')
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("sxx")
plt.show()
plt.figure(dpi = 300)
sc = plt.scatter(x_te, y_te, c = syy * s_0 * 1e-6, cmap = 'jet', s = 1, edgecolor = 'none', alpha = 1)
plt.colorbar(orientation = 'horizontal')
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("syy")
plt.show()
plt.figure(dpi = 300)
sc = plt.scatter(x_te, y_te, c = sxy * s_0 * 1e-6, cmap = 'jet', s = 1, edgecolor = 'none', alpha = 1)
plt.colorbar(orientation = 'horizontal')
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("txy")
plt.show()
plt.figure(dpi = 300)
sc = plt.scatter(x_te, y_te, c = exx * e_0, cmap = 'jet', s = 1, edgecolor = 'none', alpha = 1)
plt.colorbar(orientation = 'horizontal')
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("exx")
plt.show()
plt.figure(dpi = 300)
sc = plt.scatter(x_te, y_te, c = eyy * e_0, cmap = 'jet', s = 1, edgecolor = 'none', alpha = 1)
plt.colorbar(orientation = 'horizontal')
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("eyy")
plt.show()
plt.figure(dpi = 300)
sc = plt.scatter(x_te, y_te, c = exy * e_0, cmap = 'jet', s = 1, edgecolor = 'none', alpha = 1)
plt.colorbar(orientation = 'horizontal')
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("gxy")
plt.show()
#%% save results
np.save('X_PINN.npy', x_te)
np.save('Y_PINN.npy', y_te)
np.save('u_PINN.npy', u * U)
np.save('v_PINN.npy', v * V)
np.save('sxx_PINN.npy', sxx * s_0 * 1e-6)
np.save('syy_PINN.npy', syy * s_0 * 1e-6)
np.save('sxy_PINN.npy', sxy * s_0 * 1e-6)
np.save('exx_PINN.npy', exx * e_0)
np.save('eyy_PINN.npy', eyy * e_0)
np.save('exy_PINN.npy', exy * e_0)
Code 03
# Copyright <2024> <FAHAD BUTT>
# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#%% import necessary libraries
import os
import numpy as np
import random as rn
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.optim as optim
# import torch_directml # enable for AMD and Intel gpu
import glob
device = torch.device("cpu") # cuda for CUDA gpu, cpu for cpu, mps for Apple GPU, torch_directml.device() for any other gpu
#%% convert to polar coordinates for circle
def cartesian_to_polar(x, y, center_x, center_y):
r = torch.sqrt((x - center_x)**2 + (y - center_y)**2) # calculate radial distance from center to point (x, y)
theta = torch.atan2(y - center_y, x - center_x) # calculate angle (in radians) from positive x-axis to point (x, y)
return r, theta
def compute_polar_stresses(sxx, syy, sxy, theta):
sigma_r = sxx * torch.cos(theta)**2 + syy * torch.sin(theta)**2 + 2 * sxy * torch.cos(theta) * torch.sin(theta) # radial stress
sigma_theta = sxx * torch.sin(theta)**2 + syy * torch.cos(theta)**2 - 2 * sxy * torch.cos(theta) * torch.sin(theta) # hoop stress
tau_rtheta = (syy - sxx) * torch.cos(theta) * torch.sin(theta) + sxy * (torch.cos(theta)**2 - torch.sin(theta)**2) # shear stress (polar)
return sigma_r, sigma_theta, tau_rtheta
#%% 2D linear elasticity equations and boundary conditions
def linear_elasticity_equation(lee):
lee.requires_grad_(True) # watch lee for gradient computation
out = model(lee) # output from model
u = out[:, 0] # x displacement
v = out[:, 1] # y displacement
sxx = out[:, 2] # normal stress x
syy = out[:, 3] # normal stress y
sxy = out[:, 4] # shear stress xy
exx = out[:, 5] # normal strain x
eyy = out[:, 6] # normal strain y
exy = out[:, 7] # shear strain xy
du = torch.autograd.grad(u, lee, torch.ones_like(u), create_graph = True)[0] # du/dlee
dv = torch.autograd.grad(v, lee, torch.ones_like(v), create_graph = True)[0] # dv/dlee
dsxx = torch.autograd.grad(sxx, lee, torch.ones_like(sxx), create_graph = True)[0] # dsxx/dlee
dsyy = torch.autograd.grad(syy, lee, torch.ones_like(syy), create_graph = True)[0] # dsyy/dlee
dsxy = torch.autograd.grad(sxy, lee, torch.ones_like(sxy), create_graph = True)[0] # dsxy/dlee
dexx = torch.autograd.grad(exx, lee, torch.ones_like(exx), create_graph = True)[0] # dexx/dlee
deyy = torch.autograd.grad(eyy, lee, torch.ones_like(eyy), create_graph = True)[0] # deyy/dlee
dexy = torch.autograd.grad(exy, lee, torch.ones_like(exy), create_graph = True)[0] # dexy/dlee
u_x = du[:, 0] # du/dx
u_y = du[:, 1] # du/dy
v_x = dv[:, 0] # dv/dx
v_y = dv[:, 1] # dv/dy
sxx_x = dsxx[:, 0] # dsxx/dx
syy_y = dsyy[:, 1] # dsyy/dy
sxy_x = dsxy[:, 0] # dtxy/dx
sxy_y = dsxy[:, 1] # dtxy/dy
exx_y = dexx[:, 1] # dexx/dy
eyy_x = deyy[:, 0] # deyy/dx
exy_x = dexy[:, 0] # dexy/dx
x_mom = (D * sxx_x) + (L * sxy_y) # x momentum
y_mom = (D * sxy_x) + (L * syy_y) # y momentum
x_kin = exx - u_x # kinematic x
y_kin = eyy - v_y # kinematic y
xy_kin = exy - (0.5 * (u_y + v_x)) # kinematic xy
ps_xx = sxx - (alpha * (((1 - v_ps) * exx) + (v_ps * eyy))) # plane strain xx
ps_yy = syy - (alpha * (((1 - v_ps) * eyy) + (v_ps * exx))) # plane strain yy
ps_xy = sxy - (beta * exy) # plane strain xy
exx_yy = torch.autograd.grad(exx_y, lee, torch.ones_like(exx_y), create_graph=True)[0][:, 1] # d2exx/dy2
eyy_xx = torch.autograd.grad(eyy_x, lee, torch.ones_like(eyy_x), create_graph=True)[0][:, 0] # d2eyy/dx2
exy_xy = torch.autograd.grad(exy_x, lee, torch.ones_like(exy_x), create_graph=True)[0][:, 1] # d2exy/dxdy
comp = ((L / D) * exx_yy) - (2 * exy_xy) + ((D / L) * eyy_xx) # compatibility condition
# apply boundary conditions
u_center = u[(lee[:, 0] == 0)] # x-axis symmetry
v_center = v[(lee[:, 1] == 0)] # y-axis symmetry
sxy_left_bc = sxy[lee[:, 0] == 0] # x = 0
sxy_right_bc = sxy[lee[:, 0] == L] # x = L
sxy_bottom_bc = sxy[lee[:, 1] == 0] # y = 0
sxy_top_bc = sxy[lee[:, 1] == D] # y = D
syy_top_bc = syy[lee[:, 1] == D] # y = D
sxx_right_bc = sxx[lee[:, 0] == L] - 1 # x = L
r, theta = cartesian_to_polar(lee[:, 0], lee[:, 1], center_x, center_y) # convert coordinates to polar
# boundary conditions on hole
hole_bc_mask = torch.abs(r - radius) <= tolerance # check for points inside circle
sigma_r, sigma_theta, tau_rtheta = compute_polar_stresses(sxx, syy, sxy, theta) #calculate stresses in polar coordinates
sigma_r_hole_bc = sigma_r[hole_bc_mask] # sr = 0
tau_rtheta_hole_bc = tau_rtheta[hole_bc_mask] # srtheta = 0
return x_mom, y_mom, x_kin, y_kin, xy_kin, ps_xx, ps_yy, ps_xy, comp, \
sxy_left_bc, sxy_right_bc, sxy_bottom_bc, sxy_top_bc, \
syy_top_bc, \
sxx_right_bc, \
u_center, v_center, \
sigma_r_hole_bc, tau_rtheta_hole_bc
#%% define loss function
def loss_fn(lee):
global u_bc_loss, v_bc_loss, sxx_bc_loss, sxy_bc_loss, syy_bc_loss, sigma_r_hole_bc_loss, tau_rtheta_hole_bc_loss
lee = lee.clone().detach().requires_grad_(True).to(device) # move to "device" and use x32 data type
x_mom, y_mom, x_kin, y_kin, xy_kin, ps_xx, ps_yy, ps_xy, comp, \
sxy_left_bc, sxy_right_bc, sxy_bottom_bc, sxy_top_bc, \
syy_top_bc, \
sxx_right_bc, \
u_center, v_center,\
sigma_r_hole_bc, tau_rtheta_hole_bc = linear_elasticity_equation(lee) # compute residuals for equations with specified boundary conditions
x_mom_loss = nn.MSELoss()(x_mom, torch.zeros_like(x_mom)) # x momentum loss
y_mom_loss = nn.MSELoss()(y_mom, torch.zeros_like(y_mom)) # y momentum loss
x_kin_loss = nn.MSELoss()(x_kin, torch.zeros_like(x_kin)) # kinematic equation loss x
y_kin_loss = nn.MSELoss()(y_kin, torch.zeros_like(y_kin)) # kinematic equation loss y
xy_kin_loss = nn.MSELoss()(xy_kin, torch.zeros_like(xy_kin)) # kinematic equation loss xy
ps_xx_loss = nn.MSELoss()(ps_xx, torch.zeros_like(ps_xx)) # plane strain loss xx
ps_yy_loss = nn.MSELoss()(ps_yy, torch.zeros_like(ps_yy)) # plane strain loss yy
ps_xy_loss = nn.MSELoss()(ps_xy, torch.zeros_like(ps_xy)) # plane strain loss xy
comp_loss = nn.MSELoss()(comp, torch.zeros_like(comp)) # compatibility loss
u_bc_loss = nn.MSELoss()(u_center, torch.zeros_like(u_center)) # x symmetry loss
v_bc_loss = nn.MSELoss()(v_center, torch.zeros_like(v_center)) # y symmetry loss
sxx_bc_loss = nn.MSELoss()(sxx_right_bc, torch.zeros_like(sxx_right_bc)) # sxx boundary condition loss
sxy_bc_loss = nn.MSELoss()(sxy_left_bc, torch.zeros_like(sxy_left_bc)) + nn.MSELoss()(sxy_right_bc, torch.zeros_like(sxy_right_bc)) \
+ nn.MSELoss()(sxy_bottom_bc, torch.zeros_like(sxy_bottom_bc)) + nn.MSELoss()(sxy_top_bc, torch.zeros_like(sxy_top_bc)) # sxy boundary condition loss
syy_bc_loss = nn.MSELoss()(syy_top_bc, torch.zeros_like(syy_top_bc)) # syy boundary condition loss
sigma_r_hole_bc_loss = nn.MSELoss()(sigma_r_hole_bc, torch.zeros_like(sigma_r_hole_bc)) # sr boundary condition loss
tau_rtheta_hole_bc_loss = nn.MSELoss()(tau_rtheta_hole_bc, torch.zeros_like(tau_rtheta_hole_bc)) # srtheta boundary condition loss
bc_loss = (tau_rtheta_hole_bc_loss + sigma_r_hole_bc_loss + \
u_bc_loss + v_bc_loss + sxx_bc_loss + sxy_bc_loss + syy_bc_loss) / 200 # total boundary condition loss
mse_loss = (x_mom_loss + y_mom_loss + \
x_kin_loss + y_kin_loss + xy_kin_loss + \
ps_xx_loss + ps_yy_loss + ps_xy_loss + comp_loss) / 2500 # total equation loss
total_loss = mse_loss + bc_loss # total loss
return total_loss
#%% save and resume training
def save_checkpoint(epoch, model, optimizer): # save check point
checkpoint = {
'epoch': epoch,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict()}
checkpoint_path = os.path.join(checkpoint_dir, f"checkpoint_epoch_{epoch}.pt")
torch.save(checkpoint, checkpoint_path)
print(f"Checkpoint saved at epoch {epoch}")
def load_checkpoint(model, optimizer): # load check point
latest_checkpoint = max(glob.glob(os.path.join(checkpoint_dir, 'checkpoint_epoch_*.pt')), key=os.path.getctime)
checkpoint = torch.load(latest_checkpoint)
epoch = checkpoint['epoch']
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
print(f"Checkpoint loaded from epoch {epoch}")
return model, optimizer, epoch
#%% show model summary
def count_parameters(model):
return sum(p1.numel() for p1 in model.parameters() if p1.requires_grad) # show trainable parameters
#%% set same seeds for initialization
np.random.seed(20) # same random initializations every time numpy
rn.seed(20) # same random initializations every time for python
torch.manual_seed(20) # set seed for reproducibility
#%% generate sample points (grid)
s_0 = 1e7 # applied load [Pa]
E = 2e11 # Young's modulus [Pa]
v_ps = 0.3 # Poisson's ratio
L = 0.5 # Length of plate [m]
D = 0.25 # Width of plate [m]
E_0 = E / s_0 # non dimensional Young's modulus
U = s_0 * L / E # scaled u displacement
V = s_0 * D / E # scaled v displacement
e_0 = s_0 / E # non dimensional strain
alpha = 1 / ((1 + v_ps) * (1 - (2 * v_ps))) # non dimensional parameter
beta = 1 / (2 * (1 + v_ps)) # non dimensional parameter
center_x = 0 # center of hole x coordinate
center_y = 0 # center of hole y coordinate
radius = 0.05 #radius of hole
num_training_points = 50 # per length
num_boundary_points = 9 # points on circle
angles = np.linspace(0, np.pi / 2, num_boundary_points) # angles for each point
# x and y coordinates for circle boundary
x_boundary = center_x + radius * np.cos(angles)
y_boundary = center_y + radius * np.sin(angles)
# generate boundary and interior points
x_l = np.random.uniform(0, 0, num_training_points) # left boundary
y_l = np.random.uniform(0, D, num_training_points)
x_r = np.random.uniform(L, L, num_training_points) # right boundary
y_r = np.random.uniform(0, D, num_training_points)
x_b = np.random.uniform(0, L, num_training_points) # bottom boundary
y_b = np.random.uniform(0, 0, num_training_points)
x_t = np.random.uniform(0, L, num_training_points) # top boundary
y_t = np.random.uniform(D, D, num_training_points)
x_m = np.random.uniform(0, L, 2500) # mid points
y_m = np.random.uniform(0, D, 2500)
# Concatenate all points including the boundary points
x_tr = np.concatenate([x_l, x_r, x_b, x_t, x_m, x_boundary]) # add boundary points
y_tr = np.concatenate([y_l, y_r, y_b, y_t, y_m, y_boundary]) # add boundary points
distance = np.sqrt((x_tr - center_x) ** 2 + (y_tr - center_y) ** 2) # distance of each point from center
outside_circle = distance >= radius # filter points inside circle
x_filtered = x_tr[outside_circle]
y_filtered = y_tr[outside_circle]
tolerance = 0.001 # tolerance to mark circumference
on_hole_boundary = (np.abs(distance - radius) <= tolerance) # mark circumference
x_hole = x_tr[on_hole_boundary] # hole x coordinates
y_hole = y_tr[on_hole_boundary] # hole y coordinates
x_train = np.concatenate([x_filtered, x_hole])
y_train = np.concatenate([y_filtered, y_hole])
#%% display mesh
plt.figure(dpi = 300)
# plt.scatter(x_hole, y_hole, s = 2, c = 'r', alpha = 1) # scatter plot
plt.scatter(x_train, y_train, s = 0.1, c = 'b', alpha = 1) # scatter plot
# plt.scatter(x_filtered, y_filtered, s = 0.1, c = 'k', alpha = 1) # scatter plot
plt.xlim(0, L)
plt.ylim(0, D)
plt.grid(True)
plt.axis('equal')
plt.grid(False)
plt.show()
#%% create neural network model
h = 50 # number of neurons per hidden layer
model = nn.Sequential(
nn.Linear(2, h), # 2 inputs for x and y
nn.Tanh(), # hidden layer
nn.Linear(h, h),
nn.Tanh(), # hidden layer
nn.Linear(h, 8)).to(device) # 8 outputs for 3 stresses, 3 strains, 2 displacements
print(model) # show model details
num_trainable_params = count_parameters(model) # show trainable parameters
print("Number of trainable parameters:", num_trainable_params)
#%% prepare input data
train_points = np.vstack((x_train, y_train)).T # stack x, y together
train_points = torch.tensor(train_points, dtype=torch.float32).to(device) # convert to tensor and move training points to GPU / CPU, use float64 if more precision is required
#%% train using Adam
resume_training = False # True if resuming training
start_epoch = 1
optimizer = optim.Adam(model.parameters(), lr = 0.00001) # select optimizer and parameters
checkpoint_dir = './pytorch_checkpoints' # directory to save checkpoints
os.makedirs(checkpoint_dir, exist_ok = True) # create a directory to save checkpoints
if resume_training:
model, optimizer, start_epoch = load_checkpoint(model, optimizer)
start_epoch += 1 # start from next epoch
for epoch in range(start_epoch, 100001): # start_epoch > 1 when resuming from a checkpoint
optimizer.zero_grad() # clear gradients of all optimized variables
loss_value = loss_fn(train_points) # compute prediction by passing inputs to model
loss_value.backward() # compute gradient of loss with respect to model parameters
optimizer.step() # perform parameter update
if epoch % 100 == 0:
print(f"{epoch} {loss_value.item()}")
#%% save checkpoint and display loss
save_checkpoint(epoch, model, optimizer) # save model details
print(f"{epoch} {loss_value.item()}") # show loss values
print("Final Loss Values:")
print("u_bc_loss:", u_bc_loss.item())
print("v_bc_loss:", v_bc_loss.item())
print("sxx_bc_loss:", sxx_bc_loss.item())
print("sxy_bc_loss:", sxy_bc_loss.item())
print("syy_bc_loss:", syy_bc_loss.item())
print("sxy_bc_loss:", sxy_bc_loss.item())
print("sigma_r_hole_bc_loss:", sigma_r_hole_bc_loss.item())
#%% prediction
num_testing_points = 500 # per length
# Generate points for boundaries
x_l_te = np.random.uniform(0, 0, num_testing_points) # left boundary
y_l_te = np.random.uniform(0, D, num_testing_points)
x_r_te = np.random.uniform(L, L, num_testing_points) # right boundary
y_r_te = np.random.uniform(0, D, num_testing_points)
x_b_te = np.random.uniform(0, L, num_testing_points) # bottom boundary
y_b_te = np.random.uniform(0, 0, num_testing_points)
x_t_te = np.random.uniform(0, L, num_testing_points) # top boundary
y_t_te = np.random.uniform(D, D, num_testing_points)
x_m_te = np.random.uniform(0, L, 50000)
y_m_te = np.random.uniform(0, D, 50000)
x_te = np.concatenate([x_l_te, x_r_te, x_b_te, x_t_te, x_m_te]) # x co-ordinates
y_te = np.concatenate([y_l_te, y_r_te, y_b_te, y_t_te, y_m_te]) # y co-ordinates
distance_te = np.sqrt((x_te - center_x) ** 2 + (y_te - center_y) ** 2) # distance of each point from center
outside_circle_te = distance_te >= radius # filter out points inside circle
x_filtered_te = x_te[outside_circle_te]
y_filtered_te = y_te[outside_circle_te]
test_points = np.vstack((x_filtered_te, y_filtered_te)).T # stack x, y together
test_points = torch.tensor(test_points, dtype=torch.float32).to(device) # convert to tensor and move training points to GPU / CPU, use float64 if more precision is required
predict = model(test_points).detach().cpu().numpy() # predict and move back to CPU
u = predict[:, 0]
v = predict[:, 1]
sxx = predict[:, 2]
syy = predict[:, 3]
sxy = predict[:, 4]
exx = predict[:, 5]
eyy = predict[:, 6]
exy = predict[:, 7]
von_mises = np.sqrt(sxx**2 + syy**2 - (sxx * syy) + 3 * sxy**2) # von-mises stress
resultant_displacement = np.sqrt((2 * u * U * 1000)**2 + (2 * v * V * 1000)**2) # resultant displacement
equivalent_strain = np.sqrt((2 / 3) * (exx**2 + eyy**2 + (2 * exy**2))) # equivalent strain
average_stress = (sxx + syy) / 2
radius_stress = np.sqrt(((sxx - syy) / 2)**2 + sxy**2)
sigma_1 = average_stress + radius_stress # 1st principal stress
sigma_3 = average_stress - radius_stress # 3rd principal stress
# plotting
plt.figure(dpi = 300)
sc = plt.scatter(x_filtered_te, y_filtered_te, c = 2 * u * U * 1000, cmap = 'jet', s = 1, edgecolor='none', alpha = 1)
plt.colorbar(orientation = 'horizontal')
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("u")
plt.show()
plt.figure(dpi = 300)
sc = plt.scatter(x_filtered_te, y_filtered_te, c = 2 * v * V * 1000, cmap = 'jet', s = 1, edgecolor='none', alpha = 1)
plt.colorbar(orientation = 'horizontal')
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("v")
plt.show()
plt.figure(dpi = 300)
sc = plt.scatter(x_filtered_te, y_filtered_te, c = sxx * s_0 * 1e-6, cmap = 'jet', s = 1, edgecolor = 'none', alpha = 1)
plt.colorbar(orientation = 'horizontal')
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("sxx")
plt.show()
plt.figure(dpi = 300)
sc = plt.scatter(x_filtered_te, y_filtered_te, c = syy * s_0 * 1e-6, cmap = 'jet', s = 1, edgecolor = 'none', alpha = 1)
plt.colorbar(orientation = 'horizontal')
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("syy")
plt.show()
plt.figure(dpi = 300)
sc = plt.scatter(x_filtered_te, y_filtered_te, c = sxy * s_0 * 1e-6, cmap = 'jet', s = 1, edgecolor = 'none', alpha = 1)
plt.colorbar(orientation = 'horizontal')
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("txy")
plt.show()
plt.figure(dpi = 300)
sc = plt.scatter(x_filtered_te, y_filtered_te, c = exx * e_0, cmap = 'jet', s = 1, edgecolor = 'none', alpha = 1)
plt.colorbar(orientation = 'horizontal')
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("exx")
plt.show()
plt.figure(dpi = 300)
sc = plt.scatter(x_filtered_te, y_filtered_te, c = eyy * e_0, cmap = 'jet', s = 1, edgecolor = 'none', alpha = 1)
plt.colorbar(orientation = 'horizontal')
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("eyy")
plt.show()
plt.figure(dpi = 300)
sc = plt.scatter(x_filtered_te, y_filtered_te, c = exy * e_0, cmap = 'jet', s = 1, edgecolor = 'none', alpha = 1)
plt.colorbar(orientation = 'horizontal')
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("gxy")
plt.show()
plt.figure(dpi = 300)
sc = plt.scatter(x_filtered_te, y_filtered_te, c = von_mises * s_0 * 1e-6, cmap = 'jet', s = 1, edgecolor = 'none', alpha = 1)
plt.colorbar(orientation = 'horizontal')
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("von-mises")
plt.show()
plt.figure(dpi = 300)
sc = plt.scatter(x_filtered_te, y_filtered_te, c = equivalent_strain * e_0, cmap = 'jet', s = 1, edgecolor = 'none', alpha = 1)
plt.colorbar(orientation = 'horizontal')
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("equivalent_strain ")
plt.show()
plt.figure(dpi = 300)
sc = plt.scatter(x_filtered_te, y_filtered_te, c = resultant_displacement, cmap = 'jet', s = 1, edgecolor = 'none', alpha = 1)
plt.colorbar(orientation = 'horizontal')
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("resultant_displacement ")
plt.show()
plt.figure(dpi = 300)
sc = plt.scatter(x_filtered_te, y_filtered_te, c = sigma_1 * s_0 * 1e-6, cmap = 'jet', s = 1, edgecolor = 'none', alpha = 1)
plt.colorbar(orientation = 'horizontal')
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("sigma_1 ")
plt.show()
plt.figure(dpi = 300)
sc = plt.scatter(x_filtered_te, y_filtered_te, c = sigma_3 * s_0 * 1e-6, cmap = 'jet', s = 1, edgecolor = 'none', alpha = 1)
plt.colorbar(orientation = 'horizontal')
plt.gca().set_aspect('equal', adjustable = 'box')
plt.axis('off')
plt.title("sigma_3 ")
plt.show()
#%% save results
np.save('X_PINN.npy', x_filtered_te)
np.save('Y_PINN.npy', y_filtered_te)
np.save('u_PINN.npy', 2 * u * U)
np.save('v_PINN.npy', 2 * v * V)
np.save('sxx_PINN.npy', sxx * s_0 * 1e-6)
np.save('syy_PINN.npy', syy * s_0 * 1e-6)
np.save('sxy_PINN.npy', sxy * s_0 * 1e-6)
np.save('exx_PINN.npy', exx * e_0)
np.save('eyy_PINN.npy', eyy * e_0)
np.save('exy_PINN.npy', exy * e_0)
Thank you for reading!