
Implementation of a PyTorch deep learning multilayer perceptron

Activation functions

Links to the previous two sections:

Analysis of PyTorch deep learning softmax implementation

Analysis of PyTorch deep learning gradient and linear regression implementations

The linear model and the softmax model implemented in the previous two sections are single-layer neural networks: they contain only an input layer and an output layer, and since the input layer does not transform the data, only the output layer is counted.

The multilayer perceptron (MLP) adds hidden layers to deepen the network. Because a cascade of linear layers is still a linear layer, an activation function must be added after each hidden layer; this gives the model nonlinear capability and makes its function set larger. A quick sketch of the collapse is shown below.
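To see why stacking linear layers alone does not help, here is a minimal sketch (the weights and shapes are made up for illustration): two affine maps composed together are exactly one affine map.

import torch

# W2 @ (W1 @ x + b1) + b2 == (W2 @ W1) @ x + (W2 @ b1 + b2)
W1, b1 = torch.randn(4, 3), torch.randn(4)
W2, b2 = torch.randn(2, 4), torch.randn(2)
x = torch.randn(3)

two_layers = W2 @ (W1 @ x + b1) + b2
one_layer = (W2 @ W1) @ x + (W2 @ b1 + b2)
print(torch.allclose(two_layers, one_layer))  # True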

ReLU, sigmoid, and tanh are three common activation functions; below we plot each function and its derivative.

import torch
import matplotlib.pyplot as plt

# Plotting helper
def xyplot(x, y, name, size=(5, 2.5)):
	plt.figure(figsize=size)
	plt.plot(x.detach().numpy(), y.detach().numpy())  # detach before converting to numpy
	plt.xlabel('x')
	plt.ylabel(name + '(x)')
	plt.show()

# relu
x = torch.arange(-8.0, 8.0, 0.01, requires_grad=True)
y = x.relu()
xyplot(x, y, 'relu')

[Figure: plot of relu(x)]

y.sum().backward()
xyplot(x, x.grad, 'grad of relu')

[Figure: plot of grad of relu]

The plots for the other two activation functions are produced the same way, using x.sigmoid() and x.tanh() in place of x.relu(). A sketch for sigmoid follows.
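As a sketch, reusing the x and xyplot defined above; note that x.grad accumulates, so it must be zeroed before the next backward pass:

# sigmoid and its gradient (tanh works the same way with x.tanh())
y = x.sigmoid()
xyplot(x, y, 'sigmoid')

x.grad.zero_()  # clear the gradient left over from the relu pass
y.sum().backward()
xyplot(x, x.grad, 'grad of sigmoid')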

PyTorch implementation of a multilayer perceptron

In fact, a multilayer perceptron is nothing more than a ReLU operation added after the linear transform, plus a softmax operation on the output layer.

def relu(x):
	return torch.max(input=x, other=torch.tensor(0.0))

Besides returning the largest value in a tensor, torch.max can also serve the same purpose as the torch.maximum function: when given an other argument, it compares input and other element-wise and returns the larger of the two at each position, leaving the shape unchanged.
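A minimal sketch of the two forms (the tensor values here are illustrative):

import torch

a = torch.tensor([-1.0, 2.0, -3.0])
print(torch.max(a))                      # tensor(2.) -- reduction over the whole tensor
print(torch.max(a, torch.tensor(0.0)))   # tensor([0., 2., 0.]) -- element-wise, shape unchanged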

import torch.nn as nn

class MulPeceptron(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.fc = nn.Linear(in_features=in_features, out_features=256)
        self.out = nn.Linear(in_features=256, out_features=out_features)

    def forward(self, t):
        t = t.flatten(start_dim=1)  # flatten each sample to a vector
        t = self.fc(t)
        t = t.relu()
        t = self.out(t)
        return t

This is not a from-scratch implementation: since softmax and the linear model were handwritten in the previous sections, this version just adds one more matrix multiplication and a ReLU operation. A quick smoke test of the module is sketched below.
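For a quick check, assuming Fashion-MNIST-style inputs (1x28x28 images, 10 classes; these sizes are just an assumption for illustration):

net = MulPeceptron(in_features=784, out_features=10)
batch = torch.randn(64, 1, 28, 28)  # a fake batch of 64 images
logits = net(batch)                 # forward flattens to (64, 784) internally
print(logits.shape)                 # torch.Size([64, 10])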

Above is the detailed content of the PyTorch deep learning multilayer perceptron implementation. For more information about implementing multilayer perceptrons in PyTorch, please see my other related articles!