The Simplest Guide to Neural Networks.

Ryan Rana
7 min read · Jul 25, 2021


Photo by Jeremy Bishop on Unsplash

Artificial Intelligence is being put to use in new and innovative ways every day. Increases in safety, productivity, and efficiency as a result of AI applications are already benefiting our world, and the best is yet to come! There are many different ways to use AI, and therefore many ways to build it, but at the core of nearly all of them are neural networks.

What is a Neural Network?

A neural network is a simple framework for machine learning that is modeled after the human brain. It lets a program take in data as inputs and run a series of calculations on them to produce an output, just like a brain processes information to produce a response. The inputs enter through neurons, which are connected to one another through a series of synapses. Brains evolved over millions of years to do this, and now you have the power to do something similar with a computer.

How does a Neural Network work?

A simple neural network has three types of layers: the input layer, the hidden layer, and the output layer. In the usual diagram, the circles represent neurons and the lines connecting them are synapses. If there are multiple hidden layers, the network is called deep learning (another common buzzword with a simple definition).

  1. Inputs are taken in as a series of numbers, often arranged as a matrix (you learned about these in pre-algebra).
  2. To get to the output layer, the numbers are passed through the layers of the network.
  3. Along the way, the numbers are multiplied by specific values called weights (as sketched just below).
  4. Finally, the network returns an output (a prediction).
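
As a tiny sketch of that multiply-and-sum step, here is what it looks like in code (the numbers are made up purely for illustration, and the NumPy library is introduced properly below):

import numpy as np

inputs = np.array([2.0, 3.0])        # two input neurons
weights = np.array([[0.4], [0.6]])   # one weight per synapse
output = np.dot(inputs, weights)     # multiply each input by its weight and add them up
print(output)                        # [2.6] -- the network's "prediction"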

Now you may be asking yourself: how does the computer know what the values of these weights should be? That is figured out through a process called training.

When creating an Artificial Intelligence program, the computer needs to make predictions based on data. So you would import a data file (e.g., Excel, CSV, or XML) containing not only input data but also the corresponding outputs. The data points are fed through the neural network, and the weights are adjusted step by step until the outputs the network computes match the known outputs as closely as possible. Those adjusted weights are what the network keeps. Then, when a new prediction must be made, the new inputs just need to be run through the same weights. The more data we use to train, the more accurate our results will be.
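
For completeness, here is a rough sketch of what importing such a file with NumPy might look like (the file name and column layout are made up; the actual example below is small enough to type in by hand):

import numpy as np

# Hypothetical file: each row is "hours_practiced,hours_worked_out,points"
data = np.loadtxt("training_data.csv", delimiter=",")
X = data[:, :2]   # the first two columns are the inputs
Y = data[:, 2:]   # the last column holds the known outputs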

Making a Neural Network

Source Code: https://github.com/RyanRana/Simple-Neural-Network

Now that you have a basic understanding of how a neural network works, you can actually build one with Python. The problem we are going to tackle is as follows:

How many hours should one practice basketball and how many hours should one work out to score the most points in a basketball game?

Essentially in the network, we will be predicting the points scored in a game based on the inputs of how many hours we practiced and how many hours we worked out.

Libraries and Tools

Now, to make a neural network you can’t just use plain Python; you need certain tools and libraries to help simplify the process. The necessary libraries are as follows.

  1. NumPy: a tool to work with matrices.
  2. Matplotlib: a tool to graph and visualize data.

To install the listed libraries open your computer’s terminal (aka command line) and type in the following commands and press enter,

pip install numpy

and then

pip install matplotlib

If that doesn’t work, you can try using pip3 instead. pip is just a command-line tool for downloading all sorts of software packages. You can read about it here, though it is not necessary for learning about AI.

Importing Data

You have the tools downloaded, so now you can actually start coding: open up a new Python file in whatever IDE you would like. Here is the data that is being used:

Hours practiced   Hours worked out   Points scored
1                 0                  3
2                 4                  40
2                 2                  24
6                 0                  20

This is a small amount of data, so we don’t need to put it in an external spreadsheet file; we can just represent it as NumPy arrays. Here is how you put it in your program:

import numpy as np

That line is how you make the library available in your program; now we can actually use its tools. Let’s create two variables: X for the inputs (hours practicing, hours working out) and Y for the outputs (points scored).

X = np.array(([1, 0], [2, 4], [2, 2], [6, 0]), dtype=float)

Y = np.array(([3], [40], [24], [20]), dtype=float) / 100

Y is stored as a column so that each row of X lines up with its known output. Dividing by 100 scales the points into the range 0 to 1 (treating 100 as a maximum possible score); this matters later because the sigmoid activation we will use can only output values in that range, and we scale back up when we make a prediction.

Visualize the data

Now that the data has been imported, we can use Matplotlib to visualize it. Data science often becomes easier when you can turn raw numbers into something visual.

import matplotlib.pyplot as plt

Then you can use the following lines to create a graph:

plt.plot(X,Y)

plt.show()

These two lines produce a simple line plot of the data.

Given that this is made-up data, the lines are a bit funky, but the general gist is there. The orange line is the relationship between working out and points, while the blue line is the relationship between practicing and points. The Matplotlib library lets you do a lot visually; for example, you can add titles, change colors, and so on.
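
For example, a slightly fancier version of the same plot might look like this (the titles and labels are purely illustrative):

plt.plot(X[:, 0], Y, label="hours practiced")
plt.plot(X[:, 1], Y, label="hours worked out")
plt.title("Practice, workouts, and points scored")
plt.xlabel("Hours")
plt.ylabel("Points (scaled to 0-1)")
plt.legend()
plt.show()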

Now let’s jump back into the NumPy stuff.

Create random weights

In order to get the weights right, we first generate random weights and then slowly adjust them to be as accurate as possible. To create the random weights, use the following commands:

np.random.seed(1)

synaptic_weights = 2 * np.random.random((2, 1)) - 1

The (2, 1) shape is there because we have two input columns and one output column. The 2* and -1 at either end rescale the random numbers, which np.random.random draws from the range 0 to 1, so that the starting weights fall between -1 and 1 and are centered around zero; this is common practice and gives training a reasonable starting point. The np.random.seed(1) line just makes the "random" numbers repeatable from run to run.
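
If you want to convince yourself of what that expression does, here is a quick check (purely illustrative; the sample variable is not part of the network):

sample = 2 * np.random.random((5,)) - 1
print(sample)             # five values, each somewhere between -1 and 1
print(synaptic_weights)   # our actual starting weights, a 2 x 1 column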

Train the Weights

This is where the fun begins: you now have random weights and can refine them over many training iterations. To do that, we first create a simple for loop.

for iteration in range(100000):
    input_layer = X
    outputs = np.dot(input_layer, synaptic_weights)

The 100,000 is just a large number of iterations: the more times you adjust the weights, the better they become, but too many iterations makes the program slow, and in some settings slowness is genuinely harmful. For example, if a self-driving car’s AI is too slow to respond, the car can crash and hurt someone.

np.dot is basically just matrix multiplication. But this is still a purely linear calculation, and based on the visuals this is clearly a non-linear problem, so we need an activation function to introduce non-linearity into our network.
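
To make the shapes concrete (just a sanity check, not a new step): X is a 4 x 2 matrix and the weights are a 2 x 1 column, so the dot product gives one raw number per training example.

raw = np.dot(X, synaptic_weights)
print(raw.shape)   # (4, 1): one raw output per row of training data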

Introduce Sigmoid.

A sigmoid function is best described by Thomas Wood on the DeepAI blog: “A Sigmoid function is a mathematical function which has a characteristic S-shaped curve.” It squashes any input into a value between 0 and 1, and in code it looks like this:

def sigmoid(x):
    return 1 / (1 + np.exp(-x))
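
A quick check of what it does to a few values (illustration only):

print(sigmoid(np.array([-5.0, 0.0, 5.0])))   # roughly [0.0067 0.5 0.9933] -- everything lands strictly between 0 and 1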

Back inside the for loop, we pass the raw output through the sigmoid, like so:

    outputs = sigmoid(outputs)

Edit the weights.

Now that the random weights have produced outputs, we have to adjust the weights to make them more accurate. You may be asking how much to adjust them by; for that, we calculate the difference between the known outputs and the results we just computed (still inside the loop):

    error = Y - outputs

Now that we have the error, we multiply it by the derivative of the sigmoid at each output, which is outputs * (1 - outputs); this scales each adjustment by how confident the corresponding prediction was:

    adjustments = error * outputs * (1 - outputs)

Now it is finally time to update the weights (this line closes out the loop body):

    synaptic_weights += np.dot(input_layer.T, adjustments)
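
For reference, here is the whole training loop assembled in one place, as a minimal sketch built from the pieces above (it assumes X, Y, sigmoid, and synaptic_weights are already defined as shown):

for iteration in range(100000):
    input_layer = X
    # forward pass: multiply inputs by weights, then squash with the sigmoid
    outputs = sigmoid(np.dot(input_layer, synaptic_weights))
    # how far off were we?
    error = Y - outputs
    # scale the correction by the sigmoid's slope at each output
    adjustments = error * outputs * (1 - outputs)
    # nudge the weights in the direction that reduces the error
    synaptic_weights += np.dot(input_layer.T, adjustments)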

Predictions.

You have now finished training your weights, and therefore building your neural network, so you can actually make predictions. First, create inputs so that when you run your program the user can enter their own data.

userpractice = float(input("How many hours of practice: "))

userworking = float(input("How many hours of working out: "))

Now we have to convert these values into a NumPy array:

prediction = np.array([userpractice, userworking])

By running the prediction through the same weights and the sigmoid, and multiplying by 100 to undo the earlier scaling of the points, you get your output.

output = sigmoid(np.dot(prediction, synaptic_weights)) * 100
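
Finally, you might print the result so the user sees the predicted score (just one way to present it):

print("Predicted points scored:", output[0])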

Conclusion.

Congratulations on creating your first neural network project. You have worked with libraries, calculated sigmoids, and built an actual prediction maker. This is the first step in a long but exciting journey. Best of luck to you on your programming adventure!

Source Code: https://github.com/RyanRana/Simple-Neural-Network

Linkedin: https://www.linkedin.com/in/ryan-rana-544b761b3/

Github: https://github.com/RyanRana

Works Cited

“Build a Neural Network with Python.” Enlight, enlight.nyc/projects/neural-network.

“Create a Simple Neural Network in Python from Scratch.” YouTube, YouTube, 31 Mar. 2018, www.youtube.com/watch?v=kft1AJ9WVDk.

“A Neural Network in 11 Lines of Python.” KDnuggets, www.kdnuggets.com/2015/10/neural-network-python-tutorial.html.

“Pip (Package Manager).” Wikipedia, Wikimedia Foundation, 2 May 2021, en.wikipedia.org/wiki/Pip_(package_manager).

“Sigmoid Function.” DeepAI, 27 Sept. 2020, deepai.org/machine-learning-glossary-and-terms/sigmoid-function.

Miller, Steven. “Mind: How to Build a Neural Network (Part One).” 10 Aug. 2015, stevenmiller888.github.io/mind-how-to-build-a-neural-network/.

“A Super Simple Introduction to Neural Networks.” Matt Zeunert, 9 Dec. 2016, www.mattzeunert.com/2016/12/09/neural-networks-super-simple-introduction.html.
