
PyTorch: Defining Tensors, Indexing, and Slicing

I. Creating a Tensor

1.1 Uninitialized tensors

These methods only allocate the memory; the initial contents are whatever happens to be there (very large values, very small values, or zeros), and real data is expected to be written in afterwards.

torch.empty(): returns an uninitialized Tensor, of type FloatTensor by default.

#torch.empty(d1,d2,d3) takes a shape
torch.empty(2,3,5)
 
#tensor([[[-1.9036e-22,  6.8944e-43,  0.0000e+00,  0.0000e+00, -1.0922e-20],
#         [ 6.8944e-43, -2.8812e-24,  6.8944e-43, -5.9272e-21,  6.8944e-43],
#         [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00]],
#
#        [[ 0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00],
#         [ 0.0000e+00,  0.0000e+00,  1.4013e-45,  0.0000e+00,  0.0000e+00],
#         [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00,  0.0000e+00]]])
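Since the contents are arbitrary, the usual next step is to overwrite them. A minimal sketch, using the in-place fill_ method:

import torch

t = torch.empty(2,3)  # allocate only; contents are arbitrary
t.fill_(0.)           # overwrite in place

#tensor([[0., 0., 0.],
#        [0., 0., 0.]])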

torch.FloatTensor(): returns an uninitialized FloatTensor.

#torch.FloatTensor(d1,d2,d3)
torch.FloatTensor(2,2)
 
#tensor([[-0.0000e+00,  4.5907e-41],
#        [-7.3327e-21,  6.8944e-43]])

torch.IntTensor(): returns an uninitialized IntTensor.

#torch.IntTensor(d1,d2,d3)
torch.IntTensor(2,2)
 
#tensor([[          0,  1002524760],
#        [-1687359808,         492]], dtype=torch.int32)

1.2 Random initialization

  • Random uniform distribution: rand/rand_like, randint

rand: uniform distribution over [0,1); randint(min,max,[d1,d2,d3]): integer uniform distribution over [min,max). For uniform floats over an arbitrary range, see the sketch after the examples below.

#torch.rand(d1,d2,d3)
torch.rand(2,2)
 
#tensor([[0.8670, 0.6158],
#        [0.0895, 0.2391]])
 
#torch.rand_like() takes the shape from an existing tensor
a=torch.rand(3,2)
torch.rand_like(a)
 
#tensor([[0.2846, 0.3605],
#        [0.3359, 0.2789],
#        [0.5637, 0.6276]])
 
#torch.randint(min,max,[d1,d2,d3])
torch.randint(1,10,[3,3,3])
 
#tensor([[[3, 3, 8],
#         [2, 7, 7],
#         [6, 5, 9]],
#
#        [[7, 9, 9],
#         [6, 3, 9],
#         [1, 5, 6]],
#
#        [[5, 4, 8],
#         [7, 1, 2],
#         [3, 4, 4]]])
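randint only yields integers. To sample uniform floats over an arbitrary range, a common trick (a sketch, not a dedicated API) is to scale and shift the [0,1) samples from rand:

import torch

# uniform floats over [5,10): 5 + 5 * U[0,1)
a = 5 + 5 * torch.rand(2,2)

#e.g. tensor([[6.3127, 9.4801],
#             [5.0923, 7.7569]])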
  • Random normal distribution: randn/normal

randn returns samples from the standard normal distribution N(0,1).

#torch.randn(d1,d2,d3)
torch.randn(2,2)
 
#tensor([[ 0.3729,  0.0548],
#        [-1.9443,  1.2485]])
 
#torch.normal(mean,std) needs a mean and a standard deviation per element
torch.normal(mean=torch.full([10],0.),std=torch.arange(1,0,-0.1))
 
#tensor([-0.8547,  0.1985,  0.1879,  0.7315, -0.3785, -0.3445,  0.7092,  0.0525, 0.2669,  0.0744])
#the result is a flat vector; reshape it afterwards into the shape you want (a sketch follows)
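A quick sketch of fixing the shape afterwards, continuing the torch.normal call above:

import torch

out = torch.normal(mean=torch.full([10],0.),std=torch.arange(1,0,-0.1))
out.reshape(2,5)  # same 10 values, now a 2x5 matrix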

1.3 Assignment initialization

full: fills a tensor of the given shape with a fixed value

#torch.full([d1,d2,d3],num)
torch.full([2,2],6)
 
#tensor([[6, 6],
#        [6, 6]])
 
torch.full([],6)
#tensor(6) a scalar

torch.full([1],6)
#tensor([6]) a one-element vector
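Note that full infers the dtype from the fill value: an integer fill gives an integer tensor (as above), while a float fill gives a FloatTensor. A small sketch:

import torch

torch.full([2,2],6.)  # float fill value -> float tensor

#tensor([[6., 6.],
#        [6., 6.]])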

arange: returns an arithmetic sequence

#torch.arange(min,max,step): returns an arithmetic sequence over [min,max) with the given step, default 1
torch.arange(0,10)
 
#tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
 
torch.arange(0,10,2)
#tensor([0, 2, 4, 6, 8])

linspace/logspace: return a given number of evenly spaced values

#torch.linspace(min,max,steps): returns the given number of evenly spaced values over [min,max]
torch.linspace(1,10,11)
 
#tensor([ 1.0000,  1.9000,  2.8000,  3.7000,  4.6000,  5.5000,  6.4000,  7.3000,
#         8.2000,  9.1000, 10.0000])
 
#torch.logspace(a,b,steps): returns the given number of values evenly spaced on a log scale from 10^a to 10^b
torch.logspace(0,1,10)
 
#tensor([ 1.0000,  1.2915,  1.6681,  2.1544,  2.7826,  3.5938,  4.6416,  5.9948,
#         7.7426, 10.0000])
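logspace also accepts a base argument (10 by default), so the same call can produce powers of another base; a sketch:

import torch

torch.logspace(0,3,4,base=2)  # 2^0 .. 2^3 in 4 steps

#tensor([1., 2., 4., 8.])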

ones/zeros/eye: return an all-ones tensor, an all-zeros tensor, or an identity matrix; ones_like/zeros_like reuse the shape of an existing tensor (see the sketch after these examples)

#torch.ones(d1,d2)
torch.ones(2,2)
 
#tensor([[1., 1.],
#        [1., 1.]])
 
#torch.zeros(d1,d2)
torch.zeros(2,2)
 
#tensor([[0., 0.],
#        [0., 0.]])
 
#torch.eye() can only take one or two arguments
torch.eye(3)
 
#tensor([[1., 0., 0.],
#        [0., 1., 0.],
#        [0., 0., 1.]])
 
torch.eye(2,3)
 
#tensor([[1., 0., 0.],
#        [0., 1., 0.]])
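The _like variants mentioned above take an existing tensor and reuse its shape and dtype; a minimal sketch:

import torch

a = torch.rand(2,3)
torch.ones_like(a)  # all ones, same 2x3 shape and dtype as a

#tensor([[1., 1., 1.],
#        [1., 1., 1.]])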

1.4 Random shuffling

randperm: generally used to shuffle positions/indices; similar to Python's random.shuffle().

torch.randperm(8)
#tensor([2, 6, 7, 5, 3, 4, 1, 0])
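A typical positional use is shuffling two tensors with the same random order, e.g. a batch of features and its labels; a minimal sketch:

import torch

x = torch.rand(4,3)      # e.g. 4 samples of features
y = torch.rand(4,1)      # their labels
idx = torch.randperm(4)  # one random order for both
x, y = x[idx], y[idx]    # shuffled consistently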

II. Indexing and slicing

Simple indexing method

a=torch.rand(4,3,28,28)
a[0].shape
#torch.Size([3, 28, 28])
a[0,0,0,0]
#tensor(0.9373)

Range indexing: start:end takes everything from the start (inclusive) up to the end (exclusive), like a Python slice. Positions can also be counted from the end: [0,1,2] -> [-3,-2,-1].

a[:2].shape
#torch.Size([2, 3, 28, 28])
a[1:].shape
#torch.Size([3, 3, 28, 28])
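Negative positions count from the end, as in plain Python; a sketch on the same tensor a:

a[-1].shape   # the last of the 4 images
#torch.Size([3, 28, 28])

a[:-1].shape  # everything except the last
#torch.Size([3, 3, 28, 28])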

Strided sampling: start:end:step

a[:,:,0:28:2,:].shape
#torch.Size([4, 3, 14, 28])
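An omitted start or end defaults to the whole range, so ::2 is a common shorthand; a sketch subsampling both spatial dimensions of the same tensor:

a[:,:,::2,::2].shape  # every second row and every second column
#torch.Size([4, 3, 14, 14])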

Arbitrary index selection: a.index_select(d, indices) picks the given indices along dimension d

a.index_select(0,torch.tensor([0,2])).shape
#torch.Size([2, 3, 28, 28])

a.index_select(1,torch.tensor([0,2])).shape
#torch.Size([4, 2, 28, 28])

...: stands in for any number of dimensions

a[...].shape
#torch.Size([4, 3, 28, 28])

a[0,...].shape
#torch.Size([3, 28, 28])

a[:,2,...].shape
#torch.Size([4, 28, 28])
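... can also stand before a trailing slice, expanding to however many dimensions are needed; a sketch:

a[...,:2].shape  # slice only the last dimension
#torch.Size([4, 3, 28, 2])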

Mask indexing: mask = x.ge(0.5) is 1 (True) where the element is greater than or equal to 0.5 and 0 (False) where it is less

#torch.masked_select fetches the values at the positions where the mask is set
x=torch.randn(3,4)
mask=x.ge(0.5)
torch.masked_select(x,mask)
 
#tensor([1.6950, 1.2207, 0.6035])
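Note that masked_select always returns a flattened 1-D tensor, whatever the shape of the mask; plain boolean indexing x[mask] gives the same values:

torch.equal(torch.masked_select(x,mask), x[mask])
#True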

Indexing by flat position: torch.take(input, indices) flattens the input to one dimension, then selects the given positions

x=torch.randn(3,4)
torch.take(x,torch.tensor([0,1,5]))
 
#tensor([-2.2092, -0.2652,  0.4848])
