SolidWorks Videos

Sunday, January 14, 2024

      After successfully creating my own neural network from scratch using only math and calculus [here], I have taught myself to do the same in TensorFlow using the CPU only, so the code below runs without a CUDA GPU. I have added comments; if readers are, like me, migrating from MATLAB, the comments will probably help 😕. The results are shown in Fig. 1.
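      To double-check the CPU-only claim on your own machine, TensorFlow can list the devices it sees before anything is trained; a minimal check like the one below (using the standard tf.config.list_physical_devices call) prints an empty list for GPUs when no CUDA GPU is available:

import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))  # [] on a machine without a CUDA GPU
print(tf.config.list_physical_devices('CPU'))  # the CPU device TensorFlow falls back to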

# import libraries
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers, callbacks


# generate data
x = np.linspace(0, 1 * np.pi, round((1 * np.pi) / (np.pi / 16))).reshape(-1, 1)  # 16 neural network inputs from 0 to pi; reshape(-1, 1) makes a column vector, like x = x(:) in MATLAB
y = x**2 * np.sin(x)  # the function we want the neural network to learn
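# optional sanity check: both arrays should be 16 x 1 column vectors
print(x.shape, y.shape)  # expected output: (16, 1) (16, 1)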


# create neural network
tf.random.set_seed(42)  # initialize weights and biases to the same values every time, for reproducibility
model = models.Sequential()  # feed-forward neural network trained by back-propagation
model.add(layers.Dense(5, activation='sigmoid', input_shape=(1,)))  # 1st hidden layer; input_shape=(1,) means each input sample has a single feature
model.add(layers.Dense(1))  # output layer, no activation for regression problems
custom_optimizer = optimizers.Adam(learning_rate=0.1)  # learning rate for the Adam optimizer
model.compile(optimizer=custom_optimizer, loss='mean_squared_error')  # choose the loss function
early_stopping = callbacks.EarlyStopping(  # stopping criterion
    monitor='loss',  # monitor the training loss
    min_delta=0.0001,  # minimum change to qualify as an improvement
    patience=100,  # number of epochs with no improvement before training stops
    mode='min'  # stop when the loss has stopped decreasing
)
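# optional: print the network structure; this tiny network has only 16 trainable parameters
model.summary()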


# train the model
model.fit(x, y, epochs=1000, verbose=0, callbacks=[early_stopping])  # train silently for at most 1000 epochs, with early stopping
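# optional: report the final training loss; evaluate() reuses the compiled mean squared error loss
print('final training MSE:', model.evaluate(x, y, verbose=0))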


# make predictions
x_predict = np.linspace(0, 1 * np.pi, round((1 * np.pi) / (np.pi / 31))).reshape(-1, 1)  # 31 prediction points from 0 to pi, a finer grid than the training data
y_predict = model.predict(x_predict)  # neural network prediction
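# optional: compare the prediction against the exact function x^2 * sin(x) on the prediction grid
y_exact = x_predict**2 * np.sin(x_predict)
print('maximum absolute error:', np.max(np.abs(y_predict - y_exact)))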


# plotting
plt.figure(dpi=300)  # image resolution
plt.scatter(x, y, label='Actual Data', facecolors='none', edgecolors='red', alpha=1)
plt.plot(x_predict, y_predict, color='blue', label='Neural Network Predictions')
plt.xlabel('x')
plt.ylabel('f(x)')
plt.legend()
plt.show()


Fig. 1, Comparison of results

     Results from TensorFlow and my own code [here] are compared in Fig. 2. For the simple case shown, TensorFlow takes 2.5 s from pressing the run button to plotting, while my own code takes 0.5 s. It's neck and neck 🤣. Of course, the learning rate, number of hidden layers, hidden-layer neurons, epochs and optimizer are the same in both cases.


Fig. 2, Own code vs. analytical vs. TensorFlow
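     If you would like to reproduce the rough timing comparison, the training and prediction calls can be wrapped with Python's time.perf_counter, as sketched below. This is only a sketch: calling fit again continues training the already-fitted weights, so re-create the model first for a fair measurement.

import time
start = time.perf_counter()
model.fit(x, y, epochs=1000, verbose=0, callbacks=[early_stopping])
y_predict = model.predict(x_predict, verbose=0)
print('training + prediction took', round(time.perf_counter() - start, 2), 's')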

     If you want to hire me as your awesome PhD student, please reach out! Thank you very much for reading!
