Environment
System: Windows 10
Graphics card: GTX 965M
CPU: i7-6700HQ
Python 3.6.1
PyTorch 0.3
Package imports
import torch
from torch.autograd import Variable
import torch.nn.functional as F
import numpy as np
import visdom
import time
from torch import nn, optim
Data preparation
use_gpu = True

# four Gaussian clusters of 500 points each, centred at (6,6), (-6,6), (-6,-6) and (6,-6)
ones = np.ones((500, 2))
x1 = torch.normal(6 * torch.from_numpy(ones), 2)
y1 = torch.zeros(500)
x2 = torch.normal(6 * torch.from_numpy(ones * [-1, 1]), 2)
y2 = y1 + 1
x3 = torch.normal(-6 * torch.from_numpy(ones), 2)
y3 = y1 + 2
x4 = torch.normal(6 * torch.from_numpy(ones * [1, -1]), 2)
y4 = y1 + 3

x = torch.cat((x1, x2, x3, x4), 0).float()
y = torch.cat((y1, y2, y3, y4), ).long()
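As a quick sanity check (this snippet is my own addition, not part of the original post), the tensors above hold 2000 two-dimensional points with labels 0 to 3:

print(x.size())          # torch.Size([2000, 2]) -- 4 clusters of 500 points
print(y.size())          # torch.Size([2000])
print(y.min(), y.max())  # 0 and 3, i.e. four classes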
The visualized data looks like this:
visdom visualization preparation
First create the windows to be observed
viz = visdom.Visdom()
colors = np.random.randint(0, 255, (4, 3))  # random colors, one per class
# line chart, used to observe loss and accuracy
line = viz.line(X=np.arange(1, 10, 1), Y=np.arange(1, 10, 1))
# scatter plot, used to observe how the classification changes
# (labels passed to visdom must start at 1, hence y + 1)
scatter = viz.scatter(
    X=x,
    Y=y + 1,
    opts=dict(
        markercolor=colors,
        markersize=5,
        legend=["0", "1", "2", "3"]),)
# text window, used to display loss, accuracy and time
text = viz.text("FOR TEST")
# a second scatter plot kept as a reference for comparison
viz.scatter(
    X=x,
    Y=y + 1,
    opts=dict(
        markercolor=colors,
        markersize=5,
        legend=["0", "1", "2", "3"]
    ),
)
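Note: for these windows to appear, the visdom server has to be running first; start it in a separate terminal with "python -m visdom.server" and open http://localhost:8097 in a browser.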
The effect is as follows:
logistic regression processing
2 inputs (the point coordinates), 4 outputs (the class scores):
logstic = nn.Sequential(
    nn.Linear(2, 4)
)
GPU or CPU selection:
if use_gpu:
    gpu_status = torch.cuda.is_available()
    if gpu_status:
        logstic = logstic.cuda()
        # net = net.cuda()
        print("############### using gpu ##############")
    else:
        print("############### using cpu ##############")
else:
    gpu_status = False
    print("############### using cpu ##############")
Optimizer and loss function:
loss_f = nn.CrossEntropyLoss()
optimizer_l = optim.SGD(logstic.parameters(), lr=0.001)
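nn.CrossEntropyLoss expects raw scores (logits) and integer class labels, and applies LogSoftmax internally, which is why the model above ends with a plain Linear layer. A minimal sketch of this equivalence (my own addition, written in the same PyTorch 0.3 Variable style as the rest of the post):

scores = Variable(torch.randn(4, 4))               # 4 samples, 4 classes, raw scores
labels = Variable(torch.LongTensor([0, 1, 2, 3]))  # integer class labels
ce = nn.CrossEntropyLoss()(scores, labels)
nll = nn.NLLLoss()(F.log_softmax(scores, dim=1), labels)
print(ce.data[0], nll.data[0])                     # the two values are identical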
Train for 2000 iterations:
start_time = time.time()
time_point, loss_point, accuracy_point = [], [], []
for t in range(2000):
    if gpu_status:
        train_x = Variable(x).cuda()
        train_y = Variable(y).cuda()
    else:
        train_x = Variable(x)
        train_y = Variable(y)
    # out = net(train_x)
    out_l = logstic(train_x)
    loss = loss_f(out_l, train_y)
    optimizer_l.zero_grad()
    loss.backward()
    optimizer_l.step()
During training, observe and visualize every 10 steps (this block sits inside the loop above):
    if t % 10 == 0:
        # predicted class = index of the largest softmax score
        prediction = torch.max(F.softmax(out_l, 1), 1)[1]
        pred_y = prediction.data
        accuracy = sum(pred_y == train_y.data) / float(2000.0)
        loss_point.append(loss.data[0])
        accuracy_point.append(accuracy)
        time_point.append(time.time() - start_time)
        print("[{}/{}] | accuracy : {:.3f} | loss : {:.3f} | time : {:.2f} ".format(
            t + 1, 2000, accuracy, loss.data[0], time.time() - start_time))
        viz.line(X=np.column_stack((np.array(time_point), np.array(time_point))),
                 Y=np.column_stack((np.array(loss_point), np.array(accuracy_point))),
                 win=line,
                 opts=dict(legend=["loss", "accuracy"]))
        # this data will cause an error when running on the gpu; move it back with .cpu()
        viz.scatter(X=train_x.cpu().data, Y=pred_y.cpu() + 1, win=scatter, name="add",
                    opts=dict(markercolor=colors, legend=["0", "1", "2", "3"]))
        viz.text("<h3 align='center' style='color:blue'>accuracy : {}</h3><br>"
                 "<h3 align='center' style='color:pink'>loss : {:.4f}</h3><br>"
                 "<h3 align='center' style='color:green'>time : {:.1f}</h3>"
                 .format(accuracy, loss.data[0], time.time() - start_time), win=text)
First, run it once on the CPU; the results are as follows:
Then run it on the GPU, with the following results:
I found that the CPU is actually much faster than the GPU here, even though machine learning is supposed to run faster on a GPU. A quick search on Baidu gave the answer:
My understanding is that the GPU far outperforms the CPU on heavy matrix workloads such as image recognition, while the CPU still has the advantage when the inputs and outputs are very small, as they are here.
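A rough way to see this for yourself (my own sketch, not part of the original post): time a small and a large matrix multiplication on both devices. On the small one, kernel-launch and transfer overhead dominate the GPU timing; on the large one, the GPU pulls ahead.

import time
import torch

for n in (10, 2000):
    a, b = torch.randn(n, n), torch.randn(n, n)
    start = time.time()
    for _ in range(10):
        a.mm(b)                               # matrix multiply on the cpu
    print("cpu n={}: {:.4f}s".format(n, time.time() - start))
    if torch.cuda.is_available():
        a_gpu, b_gpu = a.cuda(), b.cuda()     # copy to gpu memory
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(10):
            a_gpu.mm(b_gpu)                   # matrix multiply on the gpu
        torch.cuda.synchronize()              # wait for the kernels to finish before timing
        print("gpu n={}: {:.4f}s".format(n, time.time() - start))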
Add a hidden layer:
net = nn.Sequential(
    nn.Linear(2, 10),
    nn.ReLU(),    # activation function
    nn.Linear(10, 4)
)
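The commented-out lines earlier (# net = net.cuda() and # out = net(train_x)) hint at how to wire this in; a minimal sketch of the changes, assuming the same training loop as above (the name optimizer_n is my own):

if use_gpu and torch.cuda.is_available():
    net = net.cuda()

optimizer_n = optim.SGD(net.parameters(), lr=0.001)   # optimizer over net's parameters

# inside the training loop, forward through net instead of logstic:
#     out = net(train_x)
#     loss = loss_f(out, train_y)
#     optimizer_n.zero_grad()
#     loss.backward()
#     optimizer_n.step()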
With a hidden layer of 10 units added, let's see whether the results improve:
Using the CPU:
Using the GPU:
Comparing the results, there doesn't seem to be any noticeable difference. It appears that for simple classification problems (few inputs, few outputs), neither extra hidden layers nor the GPU adds much.
That's all for this article.