
In the previous tutorial, we learned how to create a single-layer neural network model without coding.

In this tutorial, we will learn how to create a single-layer perceptron model with Python.

In this section, I won't use any libraries or frameworks.

Let's create an artificial neural network model step by step.

 

Firstly, we need to define the input, weight, and output values.

The inputs make up the first layer of the neural network model, although by convention the input layer is not counted when numbering layers.

# Inputs
x1 = 0.3
x2 = 0.5

# Weights
w1 = 0.2
w2 = 0.1

# Output
y = 1

 

Forward Propagation

 

Step 1: Let's multiply the values of the weights by the input values and then add them.

z = w1*x1 + w2*x2
print("z : ", z)    # z :  0.11

 

Step 2: We need to apply the sigmoid function to the z value, because the output value must lie in the range 0-1.

If we did not apply an activation function to z, the output could be an arbitrarily large number.

import math
def sigmoid(x):
    return 1/(1 + math.exp(-x))
a = sigmoid(z)
print("a: ", a)     # a:  0.5274723043445937
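As a quick check (my addition, not part of the tutorial's script), you can see the squashing behaviour of the sigmoid directly: no matter how large or small the input, the output stays strictly between 0 and 1.

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# sigmoid squashes any real number into the open interval (0, 1)
print(sigmoid(-10))  # ≈ 0.0000454
print(sigmoid(0))    # 0.5
print(sigmoid(10))   # ≈ 0.9999546
```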

 

Step 3: Finally, we will compute the error value with the squared-error function (this form makes the derivative used in backpropagation simple).

y_head = a
error = 0.5*(y - y_head)**2
print("error : ", error)    # error ≈ 0.11164

 

Backward Propagation

 

Backpropagation can be more difficult to understand than forward propagation, because it uses derivatives at each step.

If you have no background in derivatives, you can get an introduction by using this link.

 

Step 1: Find the derivative of the error function.

Using the chain rule, the derivative of the error with respect to each weight factors into three parts: d_error/d_w = (d_error/d_a) * (d_a/d_z) * (d_z/d_w).

# Step_1
# derivative of error with respect to w1
d_error = (y_head - y)  #derivative error
d_a = a*(1 - a)         #derivative a
d_z_w1 = x1             #derivative z
d_z_w2 = x2             #derivative z

print("d_error :", d_error) #d_error : -0.4725276956554063
print("d_a : ", d_a)        #d_a :  0.24924527249399803
print("d_z_w1 :", d_z_w1)   #d_z_w1 : 0.3
print("d_z_w2 :", d_z_w2)   #d_z_w2 : 0.5

 

Step 2: Find the derivative of the error with respect to w1 by multiplying the factors from Step 1 (the chain rule).

#Step_2
d_error_w1 = d_error*d_a*d_z_w1
print("d_error_w1: ", d_error_w1)   #d_error_w1:  -0.03533258827937781

 

Step 3: Find the derivative of the error with respect to w2 by multiplying the factors from Step 1.

#Step_3
d_error_w2 = d_error*d_a*d_z_w2
print("d_error_w2: ", d_error_w2)   #d_error_w2:  -0.058887647132296356
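If you want to convince yourself that the chain-rule result is right, a numerical finite-difference check (my addition, not part of the tutorial) compares the analytic gradient for w1 against the slope of the squared-error function measured with a tiny perturbation:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

x1, x2, y = 0.3, 0.5, 1
w1, w2 = 0.2, 0.1

def error_fn(w1):
    a = sigmoid(w1*x1 + w2*x2)
    return 0.5 * (y - a)**2

# analytic gradient via the chain rule
a = sigmoid(w1*x1 + w2*x2)
analytic = (a - y) * a * (1 - a) * x1

# central finite difference
h = 1e-6
numeric = (error_fn(w1 + h) - error_fn(w1 - h)) / (2*h)

print("analytic:", analytic)   # ≈ -0.0353326
print("numeric: ", numeric)    # should agree to many decimal places
```

The two numbers agreeing is strong evidence the factors d_error, d_a, and d_z_w1 were multiplied correctly.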

 

Step 4: Update w1 and w2 with the new weight values. (The raw gradient is subtracted here, which is equivalent to using a learning rate of 1.)

#Step_4
w1 = w1 - d_error_w1
w2 = w2 - d_error_w2
print("new w1 : ", w1)  #new w1 :  0.2353325882793778
print("new w2 : ", w2)  #new w2 :  0.15888764713229636
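As a sanity check (my addition), we can rerun the forward pass with the updated weights and confirm that the squared error actually shrank after the Step 4 update:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

x1, x2, y = 0.3, 0.5, 1

# original weights
w1, w2 = 0.2, 0.1
a_old = sigmoid(w1*x1 + w2*x2)
error_old = 0.5 * (y - a_old)**2

# weights after the Step 4 update
w1_new, w2_new = 0.2353325882793778, 0.15888764713229636
a_new = sigmoid(w1_new*x1 + w2_new*x2)
error_new = 0.5 * (y - a_new)**2

print("old error:", error_old)
print("new error:", error_new)   # smaller than the old error
```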
Here is the complete code for one training iteration:

# Inputs
x1 = 0.3
x2 = 0.5

# Weights
w1 = 0.2
w2 = 0.1

# Output
y = 1

"""Forward Propagation"""
#Step_1
z = w1*x1 + w2*x2
print("z : ", z)    # z :  0.11

#Step_2
import math
def sigmoid(x):
    return 1/(1 + math.exp(-x))
a = sigmoid(z)
print("a: ", a)     #a:  0.5274723043445937

#Step_3
y_head = a
error = 0.5*(y - y_head)**2
print("error : ", error)    # error ≈ 0.11164

""" BackPropagation """

# Step_1
# derivative of error with respect to w1
d_error = (y_head - y)  #derivative error
d_a = a*(1 - a)         #derivative a
d_z_w1 = x1             #derivative z
d_z_w2 = x2             #derivative z

print("d_error :", d_error) #d_error : -0.4725276956554063
print("d_a : ", d_a)        #d_a :  0.24924527249399803
print("d_z_w1 :", d_z_w1)   #d_z_w1 : 0.3
print("d_z_w2 :", d_z_w2)   #d_z_w2 : 0.5 

#Step_2
d_error_w1 = d_error*d_a*d_z_w1
print("d_error_w1: ", d_error_w1)   #d_error_w1:  -0.03533258827937781

#Step_3
d_error_w2 = d_error*d_a*d_z_w2
print("d_error_w2: ", d_error_w2)   #d_error_w2:  -0.058887647132296356

#Step_4
w1 = w1 - d_error_w1
w2 = w2 - d_error_w2
print("new w1 : ", w1)  #new w1 :  0.2353325882793778
print("new w2 : ", w2)  #new w2 :  0.15888764713229636
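Extending the single update above into a loop (a sketch of my own, keeping the implicit learning rate of 1), the error keeps shrinking as the weights grow and the output approaches the target y = 1:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

x1, x2, y = 0.3, 0.5, 1
w1, w2 = 0.2, 0.1

for i in range(1000):
    # forward pass
    z = w1*x1 + w2*x2
    a = sigmoid(z)
    error = 0.5 * (y - a)**2

    # backward pass: shared chain-rule factor, then update each weight
    common = (a - y) * a * (1 - a)
    w1 -= common * x1
    w2 -= common * x2

    if i % 200 == 0:
        print(f"iteration {i}: error = {error:.6f}")  # error shrinks toward 0
```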

 

In the next tutorial, we will see how to create a multi-layer neural network.
