1. Introduction
Having previously covered the basics of neural networks, whose main roles are prediction and classification, let's now build our first neural network for regression (curve fitting).
2. Neural network construction
2.1 Preparations
To build the fitting network and plot its output, we need several Python libraries.
```python
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt

x = torch.unsqueeze(torch.linspace(-5, 5, 100), dim=1)
y = x.pow(3) + 0.2 * torch.rand(x.size())
```
Since this is a fitting task, we of course need some data. I chose 100 equally spaced points in the interval [-5, 5] and arranged them along the curve of a cubic function, with a little random noise added.
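As a quick sanity check (a minimal sketch, not part of the original tutorial), `torch.unsqueeze` with `dim=1` turns the 1-D tensor of 100 points into a column of shape `(100, 1)`, i.e. one feature per sample, which is the shape `torch.nn.Linear` expects:

```python
import torch

# 100 evenly spaced points in [-5, 5]
x_raw = torch.linspace(-5, 5, 100)           # shape: (100,)
x = torch.unsqueeze(x_raw, dim=1)            # shape: (100, 1) -- one feature per sample
y = x.pow(3) + 0.2 * torch.rand(x.size())    # noisy cubic targets, same shape as x

print(x.shape, y.shape)
```

Without the `unsqueeze`, the linear layer would reject the 1-D input as having the wrong number of dimensions.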
2.2 Setting up the network
We define a class that inherits from the Module class encapsulated in torch.nn. In __init__ we first take the number of neurons in the input layer, hidden layer, and output layer as arguments, call the parent class's constructor, and then use torch.nn.Linear() twice: once for the linear transformation from the input layer to the hidden layer (hidden), and once for the linear transformation from the hidden layer to the output layer (predict). Next we define the forward-propagation function forward(), using relu() as the activation function on the hidden layer's output, and finally return the result of predict.
```python
class Net(torch.nn.Module):
    def __init__(self, n_feature, n_hidden, n_output):
        super(Net, self).__init__()
        self.hidden = torch.nn.Linear(n_feature, n_hidden)
        self.predict = torch.nn.Linear(n_hidden, n_output)

    def forward(self, x):
        x = F.relu(self.hidden(x))
        return self.predict(x)

net = Net(1, 20, 1)
print(net)
optimizer = torch.optim.Adam(net.parameters(), lr=0.2)
loss_func = torch.nn.MSELoss()
```
With the framework of the network built, we instantiate it by passing in the number of neurons for each of the three layers, and then define the optimizer. Here I chose Adam over stochastic gradient descent (SGD), because Adam is an improved variant of SGD and performs better in most cases. We pass in the neural network's parameters (net.parameters()) and set the learning rate. The learning rate is usually chosen as a number less than 1, and picking a good value takes experience and repeated tuning. Finally, we choose mean squared error (MSE) to compute the loss.
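To illustrate what switching optimizers looks like, here is a sketch with illustrative, untuned hyperparameters (the Sequential network below is just a stand-in for the Net class above):

```python
import torch

# stand-in for Net(1, 20, 1): Linear -> ReLU -> Linear
net = torch.nn.Sequential(
    torch.nn.Linear(1, 20),
    torch.nn.ReLU(),
    torch.nn.Linear(20, 1),
)

# Adam, as used in this article
optimizer = torch.optim.Adam(net.parameters(), lr=0.2)

# Plain SGD alternative -- often needs a different (usually smaller) learning rate
# optimizer = torch.optim.SGD(net.parameters(), lr=0.01)
```

The rest of the training loop is unchanged regardless of which optimizer is constructed.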
2.3 Training the network
Next we have to train the neural network we built. I trained for 2000 rounds (epochs): in each round we first update the prediction, then compute the loss, then clear the gradients, then backpropagate the loss (backward), and finally take an optimizer step, gradually finding the best-fitting curve.
```python
for t in range(2000):
    prediction = net(x)
    loss = loss_func(prediction, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```
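After training, it can help to confirm the fit numerically rather than only visually, e.g. by comparing the loss before and after the loop. This is a self-contained sketch under the same setup as above (the seed and the exact loss values are illustrative; results depend on the random noise):

```python
import torch

torch.manual_seed(0)  # make this sketch reproducible

x = torch.unsqueeze(torch.linspace(-5, 5, 100), dim=1)
y = x.pow(3) + 0.2 * torch.rand(x.size())

# stand-in for Net(1, 20, 1)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 20),
    torch.nn.ReLU(),
    torch.nn.Linear(20, 1),
)
optimizer = torch.optim.Adam(net.parameters(), lr=0.2)
loss_func = torch.nn.MSELoss()

with torch.no_grad():
    loss_before = loss_func(net(x), y).item()

for t in range(2000):
    prediction = net(x)
    loss = loss_func(prediction, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

with torch.no_grad():
    loss_after = loss_func(net(x), y).item()

print(loss_before, loss_after)  # the loss should drop substantially
```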
3. Effectiveness
The following plotting code shows the effect of the fit as training progresses.
```python
plt.ion()
for t in range(2000):
    prediction = net(x)
    loss = loss_func(prediction, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if t % 5 == 0:
        plt.cla()
        plt.scatter(x.data.numpy(), y.data.numpy(), s=10)
        plt.plot(x.data.numpy(), prediction.data.numpy(), 'r-', lw=2)
        plt.text(2, -100, 'Loss=%.4f' % loss.data.numpy(),
                 fontdict={'size': 10, 'color': 'red'})
        plt.pause(0.1)
plt.ioff()
plt.show()
```
The end result: a scatter plot of the noisy data with the fitted curve drawn in red and the current loss value annotated on the plot.
4. Complete code
```python
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt

x = torch.unsqueeze(torch.linspace(-5, 5, 100), dim=1)
y = x.pow(3) + 0.2 * torch.rand(x.size())

class Net(torch.nn.Module):
    def __init__(self, n_feature, n_hidden, n_output):
        super(Net, self).__init__()
        self.hidden = torch.nn.Linear(n_feature, n_hidden)
        self.predict = torch.nn.Linear(n_hidden, n_output)

    def forward(self, x):
        x = F.relu(self.hidden(x))
        return self.predict(x)

net = Net(1, 20, 1)
print(net)
optimizer = torch.optim.Adam(net.parameters(), lr=0.2)
loss_func = torch.nn.MSELoss()

plt.ion()
for t in range(2000):
    prediction = net(x)
    loss = loss_func(prediction, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if t % 5 == 0:
        plt.cla()
        plt.scatter(x.data.numpy(), y.data.numpy(), s=10)
        plt.plot(x.data.numpy(), prediction.data.numpy(), 'r-', lw=2)
        plt.text(2, -100, 'Loss=%.4f' % loss.data.numpy(),
                 fontdict={'size': 10, 'color': 'red'})
        plt.pause(0.1)
plt.ioff()
plt.show()
```
This concludes this article on implementing neural-network regression with PyTorch. For more on PyTorch regression, please search my previous articles or continue browsing the related articles below. I hope you will continue to support me in the future!