Here is code I wrote in MATLAB, following online machine-learning tutorials, for regression. In its current form, the code can fit quadratic, cubic, and periodic functions with considerable accuracy! As in [Introduction and Feed Forward Back Propagating Neural Network], I tried to add as many comments as possible. The predict portion of the code will not have back propagation. I still intend to replace for with while. If ever.
This network has 3 hidden layers, making it a "deep" neural network. So far, I have not seen any benefit from adding more hidden layers. I also don't know how to make the neural network get out of local minima.
clear; clc;
%% training data %%
x0=[0:pi/16:2*pi].'; %input data
y0=[sin(x0)]; %expected output data
x=x0./max(x0); %normalized inputs
y=y0./max(y0); %normalized outputs
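% normalizing to the max keeps inputs and targets inside the active range of
% tanh (outputs lie in [-1,1]); this simple scaling assumes max(x0) and
% max(y0) are positive, as they are for this training set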
%% size of neural network %%
inputlayersize = 1; %number of input neurons
outputlayersize = 1; %number of output neurons
firsthiddenlayersize = 5; %neurons in hidden layer 1
secondhiddenlayersize = 5; %neurons in hidden layer 2
thirdhiddenlayersize = 5; %neurons in hidden layer 3
%% weight and bias initialize; learning rate %%
w1=rand(inputlayersize, firsthiddenlayersize)-0.5; %weights input to hidden1
w2=rand(firsthiddenlayersize,secondhiddenlayersize)-0.5; %weights hidden1 to hidden2
w3=rand(secondhiddenlayersize,thirdhiddenlayersize)-0.5; %weights hidden2 to hidden3
w4=rand(thirdhiddenlayersize,outputlayersize)-0.5; %weights hidden3 to output
b1=rand(1,firsthiddenlayersize)-0.5; %bias input to hidden1
b2=rand(1,secondhiddenlayersize)-0.5; %bias hidden1 to hidden2
b3=rand(1,thirdhiddenlayersize)-0.5; %bias hidden2 to hidden3
b4=rand(1,outputlayersize)-0.5; %bias hidden3 to output
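% rand()-0.5 gives uniform values in [-0.5,0.5]; the zero-centered random
% start breaks symmetry so neurons in the same layer can learn different
% features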
lr = 0.1; %learning rate
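% lr trades speed for stability: too large and the updates oscillate or
% diverge, too small and 100000 iterations may not be enough to converge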
%% neural network %%
for i=1:100000 %full-batch gradient descent iterations
z2=x*w1+b1;
a2=activation(z2); %hidden layer 1
z3=a2*w2+b2;
a3=activation(z3); %hidden layer 2
z4=a3*w3+b3;
a4=activation(z4); %hidden layer 3
z5=a4*w4+b4;
yhat=activation(z5); %final output (normalized)
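% forward pass complete: each layer computes z = a_prev*w + b and a = tanh(z);
% rows are training samples, so the whole batch propagates in one set of
% matrix products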
delta5=-(y-yhat).*activationprime(z5); %output layer error
delta4=delta5*w4.'.*activationprime(z4); %error backpropagated to hidden layer 3
delta3=delta4*w3.'.*activationprime(z3); %error backpropagated to hidden layer 2
delta2=delta3*w2.'.*activationprime(z2); %error backpropagated to hidden layer 1
DJW4= a4.'*delta5; %weight gradient hidden3 to output
DJW3= a3.'*delta4; %weight gradient hidden2 to hidden3
DJW2= a2.'*delta3; %weight gradient hidden1 to hidden2
DJW1= x.'*delta2; %weight gradient input to hidden1
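% the DJW terms are the chain-rule gradients dJ/dw of the squared-error cost
% J = 0.5*sum((y-yhat).^2), accumulated over the whole batch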
w1=w1-(lr*DJW1); %updated weights input to hidden1
w2=w2-(lr*DJW2); %updated weights hidden1 to hidden2
w3=w3-(lr*DJW3); %updated weights hidden2 to hidden3
w4=w4-(lr*DJW4); %updated weights hidden3 to output
b1=b1-(lr*mean(delta2)); %updated bias input to hidden1
b2=b2-(lr*mean(delta3)); %updated bias hidden1 to hidden2
b3=b3-(lr*mean(delta4)); %updated bias hidden2 to hidden3
b4=b4-(lr*mean(delta5)); %updated bias hidden3 to output
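% optional (my addition, not from the tutorial): uncomment to watch the mean
% squared error fall during training
% if mod(i,10000)==0, fprintf('iteration %d, MSE %.3e\n', i, mean((y-yhat).^2)); end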
end
%% plotting %%
yhat0=yhat.*max(y0); %denormalized final outputs
hold on; grid on; box on; grid minor;
set(gca,'FontSize',40)
set(gca, 'FontName', 'Times New Roman')
%axis labels are left over from the airfoil (Cl, Cd vs AoA) application of this script
ylabel('Cl Cd','FontSize',44)
xlabel('AoA [{\circ}]','FontSize',44)
%xlim([0 1])
%xlim([0 30])
plot(x0,y0(:,1),'--','color',[0 0 0],'LineWidth',2,'MarkerSize',10)
plot(x0,yhat0(:,1),'o','color',[1 0 0],'LineWidth',2,'MarkerSize',20)
set(0,'DefaultLegendAutoUpdate','off')
legend({'Training Data','Neural Network Output'},'FontSize',44,'Location','Northwest')
%plot(x0,y0(:,2),'--','color',[0 0 0],'LineWidth',2,'MarkerSize',10)
%plot(x0,yhat0(:,2),'o','color',[1 0 0],'LineWidth',2,'MarkerSize',20)
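%% predict (forward pass only) %%
% a minimal sketch of the predict step mentioned above: the trained weights
% are reused with no back propagation. the finer query grid xq is my own
% example, not part of the original script
xq=[0:pi/64:2*pi].'; %query inputs
a2q=activation((xq./max(x0))*w1+b1); %hidden layer 1
a3q=activation(a2q*w2+b2); %hidden layer 2
a4q=activation(a3q*w3+b3); %hidden layer 3
yq=activation(a4q*w4+b4).*max(y0); %denormalized prediction
%plot(xq,yq,'-','color',[0 0 1],'LineWidth',2) %uncomment to overlay the prediction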
%% activation %%
function [s]=activation(z)
%s=1./(1+exp(-z)); %sigmoid alternative
s=tanh(z);
end
%% derivative of activation %%
function [s]=activationprime(z)
%s=(exp(-z))./((1+exp(-z))).^2; %sigmoid derivative alternative
s=(sech(z)).^2;
end
Let's see what the future brings! If you want to collaborate on research projects related to turbo-machinery, aerodynamics, renewable energy and, well, machine learning, please reach out. Thank you very much for reading!