
Python implementation of gradient descent

This article shares a concrete Python implementation of the gradient descent method for your reference; the details are as follows.

Tools used: Python(x,y) 2.6.6
Running environment: Windows 10

Problem: recover y = 2*x1 + x2 + 3, i.e., use gradient descent to find the optimal values of the parameters a, b, and c in y = a*x1 + b*x2 + c (supervised learning).

Training data:

x_train = [[1, 2], [2, 1], [2, 3], [3, 5], [1, 3], [4, 2], [7, 3], [4, 5], [11, 3], [8, 7]]

y_train = [7, 8, 10, 14, 8, 13, 20, 16, 28, 26]

Test data:

x_test = [[1, 4], [2, 2], [2, 5], [5, 3], [1, 5], [4, 1]]
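For reference, here is the update rule that the code below implements. With the hypothesis h(x) = a*x1 + b*x2 + c and a squared-error loss, batch gradient descent accumulates the prediction error over all training pairs and moves each parameter against its partial derivative:

J(a, b, c) = \frac{1}{2} \sum_i (y_i - h(x_i))^2

a \leftarrow a + \eta \sum_i (y_i - h(x_i)) \, x_{i,1}
b \leftarrow b + \eta \sum_i (y_i - h(x_i)) \, x_{i,2}
c \leftarrow c + \eta \sum_i (y_i - h(x_i))

where the learning rate \eta corresponds to rate = 0.001 in the code.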

# -*- coding: utf-8 -*-
"""
Created on Wed Nov 16 09:37:03 2016
@author: Jason
"""
 
import numpy as np
import matplotlib.pyplot as plt

# Target function: y = 2 * x1 + x2 + 3

rate = 0.001
x_train = np.array([[1, 2], [2, 1], [2, 3], [3, 5], [1, 3], [4, 2], [7, 3], [4, 5], [11, 3], [8, 7]])
y_train = np.array([7, 8, 10, 14, 8, 13, 20, 16, 28, 26])
x_test = np.array([[1, 4], [2, 2], [2, 5], [5, 3], [1, 5], [4, 1]])

# Random initial values for the parameters
a = np.random.normal()
b = np.random.normal()
c = np.random.normal()

# Hypothesis: prediction for one sample x = [x1, x2]
def h(x):
    return a * x[0] + b * x[1] + c

for i in range(100):
    sum_a = 0
    sum_b = 0
    sum_c = 0

    # Accumulate the (scaled) gradient over the whole training set
    for x, y in zip(x_train, y_train):
        sum_a = sum_a + rate * (y - h(x)) * x[0]
        sum_b = sum_b + rate * (y - h(x)) * x[1]
        sum_c = sum_c + rate * (y - h(x))

    # Update the parameters once per pass over the data (batch gradient descent)
    a = a + sum_a
    b = b + sum_b
    c = c + sum_c

    # Plot the test-set predictions after each iteration
    plt.plot([h(xi) for xi in x_test])

print(a)
print(b)
print(c)

result = [h(xi) for xi in x_train]
print(result)

result = [h(xi) for xi in x_test]
print(result)

plt.show()
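As a sanity check (not part of the original article), the same parameters can be recovered in closed form with NumPy's least-squares solver; the gradient descent result should approach this solution as the number of iterations grows:

import numpy as np

x_train = np.array([[1, 2], [2, 1], [2, 3], [3, 5], [1, 3], [4, 2], [7, 3], [4, 5], [11, 3], [8, 7]])
y_train = np.array([7, 8, 10, 14, 8, 13, 20, 16, 28, 26])

# Append a column of ones so the intercept c is fitted together with a and b
X = np.hstack([x_train, np.ones((len(x_train), 1))])
params, residuals, rank, sv = np.linalg.lstsq(X, y_train, rcond=None)
print(params)  # the training data are exact, so this prints values close to [2, 1, 3]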

Run results: (figure omitted: the plot of test-set predictions drawn after each iteration)

Conclusion: the plotted predictions gradually approach the true values; with more training data and more iterations, the learned parameters get closer to the true values a = 2, b = 1, c = 3.
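To test this claim concretely, the training loop can be wrapped in a function (a hypothetical refactor of the script above, not from the original article) and run with increasing iteration counts; the printed parameters should drift toward (2, 1, 3):

import numpy as np

def fit(epochs, rate=0.001):
    x_train = np.array([[1, 2], [2, 1], [2, 3], [3, 5], [1, 3], [4, 2], [7, 3], [4, 5], [11, 3], [8, 7]])
    y_train = np.array([7, 8, 10, 14, 8, 13, 20, 16, 28, 26])
    a = b = c = 0.0  # fixed start instead of random, so runs are reproducible
    for _ in range(epochs):
        # Vectorized form of the per-sample loop in the article's script
        err = y_train - (a * x_train[:, 0] + b * x_train[:, 1] + c)
        a += rate * np.sum(err * x_train[:, 0])
        b += rate * np.sum(err * x_train[:, 1])
        c += rate * np.sum(err)
    return a, b, c

for epochs in (100, 1000, 10000):
    print(epochs, fit(epochs))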

This is the whole content of this article.