SolidWorks Videos

Friday, September 22, 2023

Machine Learning (ML) Accelerated Computational Fluid Dynamics (CFD) Attempt

     An attempt at ML 🧠💻 accelerated CFD 🌬. The idea is to run the CFD solver on a coarse 10x10 grid and have a neural network reproduce the results on a 100x100 or even finer grid. It needs a better ML algorithm; this version uses a deep feed-forward back-propagating neural network. It could be a good topic for my little PhD. Please reach out if you want to collaborate!


%% clear and close

close all

clear

clc


%% define spatial and temporal grids

h = 1/10; % grid spacing

cfl = h*1; % cfl number

L = 1; % cavity length

D = 1; % cavity depth

Nx = round((L/h)+1); % grid points in x-axis

Ny = round((D/h)+1); % grid points in y-axis

nu = 0.000015111; % kinematic viscosity

Uinf = 0.0015111; % free stream velocity / inlet velocity  / lid velocity

dt = h*cfl/Uinf; % time step

travel = 2; % times the disturbance travels entire length of computational domain

TT = travel*L/Uinf; % total time

ns = TT/dt; % number of time steps

l_square = 1; % length of square

Re = l_square*Uinf/nu; % Reynolds number

rho = 1.2047; % Initial fluid density
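With these values the solver takes exactly 200 time steps and runs at Re = 100. A quick Python transcription of the same arithmetic (a sketch only; names mirror the MATLAB above) confirms this:

```python
# Grid and flow parameters from the CFD section above (a quick sanity check)
h = 1 / 10              # grid spacing
cfl = h * 1             # CFL number
L = D = 1               # cavity length and depth
Nx = round(L / h) + 1   # grid points in x-axis
Ny = round(D / h) + 1   # grid points in y-axis
nu = 0.000015111        # kinematic viscosity
Uinf = 0.0015111        # lid velocity
dt = h * cfl / Uinf     # time step from the CFL condition
TT = 2 * L / Uinf       # total time: disturbance crosses the cavity twice
ns = TT / dt            # number of time steps
Re = 1 * Uinf / nu      # Reynolds number (unit reference length)
print(Nx, Ny, ns, Re)
```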


%% initialize flowfield

u = zeros(Nx,Ny); % x-velocity

v = zeros(Nx,Ny); % y-velocity

p = zeros(Nx,Ny); % pressure

i = 2:Nx-1; % spatial interior nodes in x-axis

j = 2:Ny-1; % spatial interior nodes in y-axis

[X, Y] = meshgrid(0:h:L, 0:h:D); % spatial grid

maxNumCompThreads('automatic'); % select CPU cores


%% solve 2D Navier-Stokes equations

for nt = 1:ns

    p(i, j) = ((p(i+1, j) + p(i-1, j)) * h^2 + (p(i, j+1) + p(i, j-1)) * h^2) ./ (2 * (h^2 + h^2)) ...

        - h^2 * h^2 / (2 * (h^2 + h^2)) * (rho * (1 / dt * ((u(i+1, j) - u(i-1, j)) / (2 * h) + (v(i, j+1) - v(i, j-1)) / (2 * h)))); % pressure poisson

    p(1, :) = p(2, :); % dp/dx = 0 at x = 0

    p(Nx, :) = p(Nx-1, :); % dp/dx = 0 at x = L

    p(:, 1) = p(:, 2); % dp/dy = 0 at y = 0

    p(:, Ny) = 0; % p = 0 at y = D

    u(i, j) = u(i, j) - u(i, j) * dt / (2 * h) .* (u(i+1, j) - u(i-1, j)) ...

        - v(i, j) * dt / (2 * h) .* (u(i, j+1) - u(i, j-1)) - dt / (2 * rho * h) * (p(i+1, j) - p(i-1, j)) ...

        + nu * (dt / h^2 * (u(i+1, j) - 2 * u(i, j) + u(i-1, j)) + dt / h^2 * (u(i, j+1) - 2 * u(i, j) + u(i, j-1))); % x-momentum

    u(1, :) = 0; % u = 0 at x = 0

    u(Nx, :) = 0; % u = 0 at x = L

    u(:, 1) = 0; % u = 0 at y = 0

    u(:, Ny) = Uinf; % u = Uinf at y = D

    v(i, j) = v(i, j) - u(i, j) * dt / (2 * h) .* (v(i+1, j) - v(i-1, j)) ...

        - v(i, j) * dt / (2 * h) .* (v(i, j+1) - v(i, j-1)) - dt / (2  * rho * h) * (p(i, j+1) - p(i, j-1)) ...

        + nu * (dt / h^2 * (v(i+1, j) - 2 * v(i, j) + v(i-1, j)) + dt / h^2 * (v(i, j+1) - 2 * v(i, j) + v(i, j-1))); % y-momentum

    v(1, :) = 0; % v = 0 at x = 0

    v(Nx, :) = 0; % v = 0 at x = L

    v(:, 1) = 0; % v = 0 at y = 0

    v(:, Ny) = 0; % v = 0 at y = D

end
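For readers more comfortable with NumPy, the same explicit update can be sketched as follows. This is a sketch under the same simplifications as the MATLAB loop (one Jacobi-style pressure sweep per step, central differences), not a validated solver, and it uses the old velocities in both momentum updates rather than the half-updated `u` the MATLAB loop sees:

```python
import numpy as np

# Parameters mirroring the MATLAB setup above
h, nu, rho, Uinf = 0.1, 1.5111e-5, 1.2047, 1.5111e-3
dt = h * h / Uinf  # h * cfl / Uinf with cfl = h
Nx = Ny = 11

u = np.zeros((Nx, Ny))  # x-velocity
v = np.zeros((Nx, Ny))  # y-velocity
p = np.zeros((Nx, Ny))  # pressure
u[:, -1] = Uinf         # moving lid at y = D

def step(u, v, p):
    """One explicit time step: a Jacobi sweep of the pressure Poisson
    equation, then the x- and y-momentum updates."""
    pn, un, vn = p.copy(), u.copy(), v.copy()
    p[1:-1, 1:-1] = 0.25 * (pn[2:, 1:-1] + pn[:-2, 1:-1]
                            + pn[1:-1, 2:] + pn[1:-1, :-2]) \
        - h**2 / 4 * rho / dt * ((un[2:, 1:-1] - un[:-2, 1:-1]) / (2 * h)
                                 + (vn[1:-1, 2:] - vn[1:-1, :-2]) / (2 * h))
    p[0, :] = p[1, :]; p[-1, :] = p[-2, :]  # dp/dx = 0 at x = 0 and x = L
    p[:, 0] = p[:, 1]; p[:, -1] = 0         # dp/dy = 0 at y = 0; p = 0 at lid
    u[1:-1, 1:-1] = (un[1:-1, 1:-1]
        - un[1:-1, 1:-1] * dt / (2 * h) * (un[2:, 1:-1] - un[:-2, 1:-1])
        - vn[1:-1, 1:-1] * dt / (2 * h) * (un[1:-1, 2:] - un[1:-1, :-2])
        - dt / (2 * rho * h) * (p[2:, 1:-1] - p[:-2, 1:-1])
        + nu * dt / h**2 * (un[2:, 1:-1] - 2 * un[1:-1, 1:-1] + un[:-2, 1:-1]
                            + un[1:-1, 2:] - 2 * un[1:-1, 1:-1] + un[1:-1, :-2]))
    u[0, :] = u[-1, :] = u[:, 0] = 0; u[:, -1] = Uinf  # no-slip walls, moving lid
    v[1:-1, 1:-1] = (vn[1:-1, 1:-1]
        - un[1:-1, 1:-1] * dt / (2 * h) * (vn[2:, 1:-1] - vn[:-2, 1:-1])
        - vn[1:-1, 1:-1] * dt / (2 * h) * (vn[1:-1, 2:] - vn[1:-1, :-2])
        - dt / (2 * rho * h) * (p[1:-1, 2:] - p[1:-1, :-2])
        + nu * dt / h**2 * (vn[2:, 1:-1] - 2 * vn[1:-1, 1:-1] + vn[:-2, 1:-1]
                            + vn[1:-1, 2:] - 2 * vn[1:-1, 1:-1] + vn[1:-1, :-2]))
    v[0, :] = v[-1, :] = v[:, 0] = v[:, -1] = 0        # no-slip on all walls
    return u, v, p

for _ in range(200):  # 200 steps = disturbance travels the cavity twice
    u, v, p = step(u, v, p)
print(float(np.abs(u[1:-1, 1:-1]).max()) > 0)
```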


%% post-processing

velocity_magnitude = sqrt(u.^2 + v.^2); % Velocity magnitude

data = zeros(Nx*Ny, 4); % 4 columns: X, Y, velocity magnitude, P

counter = 0;

for ii = 1:Nx

    for jj = 1:Ny

        counter = counter + 1;

        data(counter, 1) = X(ii, jj); % x-coordinate

        data(counter, 2) = Y(ii, jj); % y-coordinate

        data(counter, 3) = velocity_magnitude(ii, jj); % velocity magnitude

        data(counter, 4) = p(ii, jj); % pressure

    end

end
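The double loop above only flattens the 2-D fields into an (Nx·Ny)×4 table. For comparison, the equivalent NumPy reshaping (with stand-in field values) is a one-liner:

```python
import numpy as np

N, L, D = 11, 1.0, 1.0
X, Y = np.meshgrid(np.linspace(0, L, N), np.linspace(0, D, N))
vel = np.hypot(X, Y)   # stand-in for the velocity magnitude field
p = np.zeros_like(X)   # stand-in for the pressure field
# columns: x-coordinate, y-coordinate, velocity magnitude, pressure
data = np.column_stack([X.ravel(), Y.ravel(), vel.ravel(), p.ravel()])
print(data.shape)
```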


%% prepare data for the neural network


x = normalize(data(:,1:2)); % inputs: (x, y) coordinates, z-scored by normalize's default

y = normalize(data(:,3:4)); % targets: velocity magnitude and pressure, z-scored by normalize's default



%% define network architecture


inputlayersize = 2; % input layer size

outputlayersize = 2; % output layer size

firsthiddenlayersize = 10; % neurons in hidden layer 1

secondhiddenlayersize = 10; % neurons in hidden layer 2

thirdhiddenlayersize = 10; % neurons in hidden layer 3


%% weight and bias initialize; learning rate


w1 = (rand(inputlayersize, firsthiddenlayersize) - 0.5) * 0.1; % weights input to hidden 1 (small random values so backpropagation can break symmetry; all-zero weights would stop the earlier layers from learning)

w2 = (rand(firsthiddenlayersize, secondhiddenlayersize) - 0.5) * 0.1; % weights hidden 1 to hidden 2

w3 = (rand(secondhiddenlayersize, thirdhiddenlayersize) - 0.5) * 0.1; % weights hidden 2 to hidden 3

w4 = (rand(thirdhiddenlayersize, outputlayersize) - 0.5) * 0.1; % weights hidden 3 to output

b1 = zeros(1, firsthiddenlayersize); % bias hidden 1

b2 = zeros(1, secondhiddenlayersize); % bias hidden 2

b3 = zeros(1, thirdhiddenlayersize); % bias hidden 3

b4 = zeros(1, outputlayersize); % bias output

lr = 0.1; % learning rate
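Zero-initialized weights never break symmetry: the backpropagated deltas are multiplied by the next layer's weights, so every layer before the last would receive exactly zero gradient. Small random values are the usual fix. In NumPy such an initialization might look like this (a sketch; the seed and the 0.1 scale are assumptions, the layer sizes mirror the MATLAB):

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [2, 10, 10, 10, 2]  # input, three hidden layers, output
# small random weights break symmetry; biases can safely start at zero
weights = [rng.standard_normal((a, b)) * 0.1
           for a, b in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros((1, b)) for b in sizes[1:]]
print([w.shape for w in weights])
```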


%% training

for epoch = 1:100000 % renamed from i to avoid clobbering the spatial index

    z2=x*w1+b1;

    a2=activation(z2); % hidden layer 1

    z3=a2*w2+b2;

    a3=activation(z3); % hidden layer 2

    z4=a3*w3+b3;

    a4=activation(z4); % hidden layer 3

    z5=a4*w4+b4;

    yhat=activation(z5); %final output

    delta5=-(y-yhat).*activationprime(z5);

    delta4=delta5*w4.'.*activationprime(z4);

    delta3=delta4*w3.'.*activationprime(z3);

    delta2=delta3*w2.'.*activationprime(z2);

    DJW4= a4.'*delta5; % error hidden3 to output

    DJW3= a3.'*delta4; % error hidden2 to hidden3

    DJW2= a2.'*delta3; % error hidden1 to hidden2

    DJW1= x.'*delta2; % error input to hidden1

    w1=w1-(lr*DJW1); % updated weights input to hidden1

    w2=w2-(lr*DJW2); % updated weights hidden1 to hidden2

    w3=w3-(lr*DJW3); % updated weights hidden2 to hidden3

    w4=w4-(lr*DJW4); % updated weights hidden3 to output

    b1=b1-(lr*mean(delta2)); % updated bias input to hidden1

    b2=b2-(lr*mean(delta3)); % updated bias hidden1 to hidden2

    b3=b3-(lr*mean(delta4)); % updated bias hidden2 to hidden3

    b4=b4-(lr*mean(delta5)); % updated bias hidden3 to output

end
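The same forward and backward passes can be written compactly in NumPy. This is a sketch on toy data, not the author's exact run: the targets and seed are made up, and the gradients are averaged over the batch here for stable steps (the MATLAB sums them):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1 - s)  # equals exp(-z) / (1 + exp(-z))^2

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, (50, 2))               # toy inputs
y = sigmoid(x @ rng.standard_normal((2, 2)))  # toy targets in (0, 1)

sizes = [2, 10, 10, 10, 2]                    # same layer sizes as above
W = [rng.standard_normal((a, b)) * 0.1 for a, b in zip(sizes[:-1], sizes[1:])]
b = [np.zeros((1, s)) for s in sizes[1:]]
lr = 0.1

def forward(x):
    """Forward pass, keeping pre-activations z and activations a."""
    zs, acts, a = [], [x], x
    for Wl, bl in zip(W, b):
        z = a @ Wl + bl
        zs.append(z)
        a = sigmoid(z)
        acts.append(a)
    return zs, acts

initial_loss = np.mean((y - forward(x)[1][-1]) ** 2)

for epoch in range(2000):
    zs, acts = forward(x)
    delta = -(y - acts[-1]) * sigmoid_prime(zs[-1])  # output-layer error
    for l in range(len(W) - 1, -1, -1):
        grad_W = acts[l].T @ delta / len(x)          # averaged over the batch
        grad_b = delta.mean(axis=0)
        if l > 0:                                    # propagate before updating
            delta = (delta @ W[l].T) * sigmoid_prime(zs[l - 1])
        W[l] -= lr * grad_W
        b[l] -= lr * grad_b

final_loss = np.mean((y - forward(x)[1][-1]) ** 2)
print(final_loss < initial_loss)
```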


%% testing and plotting

xt = x; % test inputs (reusing the training inputs)

z2=xt*w1+b1;

a2=activation(z2); % hidden layer 1

z3=a2*w2+b2;

a3=activation(z3); % hidden layer 2

z4=a3*w3+b3;

a4=activation(z4); % hidden layer 3

z5=a4*w4+b4;

yhat=activation(z5); % final output

disp(mean(y) - mean(yhat)) % difference between mean targets and mean predictions

hold on; grid on; box on; grid minor

set(gca,'FontSize',40)

set(gca, 'FontName', 'Times New Roman')

ylabel('Normalized velocity magnitude','FontSize',44) % targets are z-scored, so values are dimensionless

xlabel('Normalized x','FontSize',44)

plot(x(:,1),y(:,1),'s','LineWidth',1,'MarkerSize',10,'Color', [0 0 0]) % CFD result

plot(xt(:,1),yhat(:,1),'o','LineWidth',1,'MarkerSize',10,'Color', [1 0 0]) % neural network prediction


%% activation and derivative of activation


function [s]=activation(z)

s = 1 ./ (1 + exp(-z));

% s = tanh(z);

% s = z .* (1 ./ (1 + exp(-z)));

% s = cos(z);

% s = tanh(z) - (1 ./ (1 + exp(-z)));

end


function [s]=activationprime(z)

s = (exp(-z)) ./ ((1 + exp(-z))).^2;

% s = (sech(z)).^2;

% s = (exp(z) .* (1 + exp(z) + z) ) ./ (1 + exp(z)).^2;

% s = -sin(z);

% s = -exp(z) ./ (1 + exp(z)).^2 + (sech(z)).^2;

end
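A quick finite-difference check confirms that the derivative function matches the sigmoid activation (a sketch in Python; the commented-out alternatives above can be checked the same way):

```python
import numpy as np

def activation(z):
    return 1 / (1 + np.exp(-z))

def activationprime(z):
    # exp(-z) / (1 + exp(-z))^2, equivalently s * (1 - s)
    s = activation(z)
    return s * (1 - s)

z = np.linspace(-3, 3, 7)
eps = 1e-6
# central difference approximation of the derivative
numeric = (activation(z + eps) - activation(z - eps)) / (2 * eps)
max_err = np.max(np.abs(numeric - activationprime(z)))
print(max_err < 1e-8)
```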


Cite as: Fahad Butt (2023). Machine Learning (ML) Accelerated Computational Fluid Dynamics (CFD) Attempt (https://whenrobotslove.blogspot.com/2023/09/machine-learning-ml-accelerated.html), When Robots Love!, Retrieved Month Date, Year
