Make sure you are familiar with cost/loss functions and gradients before you go through this blog.
Gradient descent is also referred to as optimization.
We have seen how to fit a linear regression and how to calculate its accuracy. As we know, to get the best accuracy we use gradient descent, adjusting the slope and intercept to reduce the error. Let's see how we can update the slope and intercept to achieve the best accuracy.
We already derived Y = mX + c in one of our earlier articles. To get the best accuracy we need to reduce the error and find the best slope and intercept values, and to find those values we use the stochastic gradient descent algorithm.
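Before working through the numbers, here is a minimal sketch of a single stochastic gradient descent update for Y = mX + c, assuming a squared-error loss. The function name sgd_update and the default learning rate of 0.01 are illustrative assumptions, not something stated above.

```python
# A single stochastic gradient descent update for Y = mX + c,
# assuming a squared-error loss. The learning rate alpha = 0.01 is an
# assumption for illustration; the blog does not state it explicitly.

def sgd_update(m, c, x, y, alpha=0.01):
    """Update slope m and intercept c using one observation (x, y)."""
    prediction = m * x + c       # current prediction
    error = prediction - y       # prediction error for this observation
    m = m - alpha * error * x    # gradient of the squared error w.r.t. m is proportional to error * x
    c = c - alpha * error        # gradient of the squared error w.r.t. c is proportional to error
    return m, c
```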
Now let's work through the above equations on the dataset below.
First Iteration:
In the first iteration we take both m and c as 0. Now let's evaluate our equation:
Y = mX+c.
Let's calculate Y for X = 1 from our given dataset:
Y = 0*1+0 = 0
After updating m and c with the error from this first prediction, the values become m = 0.01 and c = 0.01. With these values, calculate Y for the second observation in the dataset, where Y = 3.
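The jump from m = 0, c = 0 to m = 0.01, c = 0.01 is one gradient update. A minimal check, reusing the sgd_update sketch above and assuming the first observation is (X = 1, Y = 1) with a learning rate of 0.01 (both assumptions, since the dataset table and learning rate are not shown in the text, but they are consistent with the numbers here):

```python
# Reproducing the first update with the sgd_update sketch above, assuming the
# first observation is (X = 1, Y = 1) and alpha = 0.01 -- both assumptions,
# but consistent with m and c moving from 0 to 0.01.
m, c = sgd_update(m=0.0, c=0.0, x=1.0, y=1.0, alpha=0.01)
print(m, c)  # 0.01 0.01  (error = 0 - 1 = -1, so m increases by 0.01*1 and c by 0.01)
```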
We performed 20 iterations in total, which is four epochs (we have 5 observations, so 20 iterations / 5 observations = 4 passes over the data, i.e. four epochs). We will discuss epochs in more detail in our future blogs.
Download the Excel sheet from here.
We have done enough iterations, so we can take the last values and build the model with m = 0.79044 and c = 0.230897491.
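For completeness, here is a compact sketch of the whole training run. The five-observation dataset and the learning rate of 0.01 below are assumptions for illustration (the actual table and settings live in the Excel worksheet linked above); under those assumptions the loop lands close to the m and c values reported above.

```python
# A sketch of the full run: 4 epochs over 5 observations = 20 updates.
# The dataset values and alpha = 0.01 are assumptions for illustration;
# the actual numbers come from the Excel worksheet linked above.

x_data = [1.0, 2.0, 4.0, 3.0, 5.0]   # assumed X values
y_data = [1.0, 3.0, 3.0, 2.0, 5.0]   # assumed Y values
alpha = 0.01                          # assumed learning rate

m, c = 0.0, 0.0
for epoch in range(4):                    # 4 epochs
    for x, y in zip(x_data, y_data):      # 5 observations each -> 20 iterations total
        error = (m * x + c) - y           # prediction error for this observation
        m = m - alpha * error * x         # update the slope
        c = c - alpha * error             # update the intercept

print(m, c)          # with these assumptions, close to m = 0.79, c = 0.23
print(m * 4.0 + c)   # predict Y for a new X = 4 with the fitted model
```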