SoFunction
Updated on 2025-05-20

Implementation examples of the view() function in PyTorch

In PyTorch, view is an extremely important tensor operation function. Its main purpose is to reshape a tensor while keeping the total number of elements unchanged.

1. Basic functions and syntax

The main purpose of view is to change a tensor's dimensions and size, under the constraint that the total number of elements is the same before and after reshaping. The syntax is as follows:

tensor.view(*args)

Here *args represents the new shape, given either as a tuple or as several comma-separated integers.
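As a quick illustration (the tensor contents here are arbitrary), both calling styles produce the same result:

```python
import torch

x = torch.arange(12)   # a 1-D tensor with 12 elements
a = x.view(3, 4)       # shape passed as separate integers
b = x.view((3, 4))     # shape passed as a tuple; equivalent
print(a.shape)         # torch.Size([3, 4])
print(b.shape)         # torch.Size([3, 4])
```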

2. Core usage scenarios

2.1 Reducing the number of dimensions

import torch

x = torch.randn(2, 3, 4)  # at this point, the shape of x is [2, 3, 4]
y = x.view(2, 12)         # the shape of y becomes [2, 12]

2.2 Increasing the number of dimensions

x = torch.randn(6)  # the shape of x is [6]
y = x.view(2, 3)    # the shape of y becomes [2, 3]

2.3 The special use of -1

When -1 appears in the shape arguments, PyTorch automatically computes the size of the corresponding dimension.

x = torch.randn(2, 3, 4)  # the shape of x is [2, 3, 4]
y = x.view(2, -1)         # the shape of y is [2, 12]
z = x.view(-1, 3, 2)      # the shape of z is [4, 3, 2]

3. Memory contiguity requirement

view requires the input tensor to be contiguous in memory. If the tensor is not contiguous, the contiguous() function must be called first.

x = torch.randn(2, 3)
y = x.t()                      # transpose x; y is no longer contiguous in memory
z = y.contiguous().view(3, 2)  # call contiguous() first, then use view

4. Differences from the reshape function

  • view: can only be used when the tensor's memory is contiguous, but it is guaranteed to return a view of the original tensor, meaning no data is copied, which improves memory efficiency.
  • reshape: works whether or not the tensor's memory is contiguous; it may return a view of the original tensor or a copy of the data.

5. View mechanism

view returns a view of the original tensor, not a new tensor. This means that when you modify the view, the original tensor changes as well.

x = torch.tensor([1, 2, 3, 4])
y = x.view(2, 2)
y[0, 0] = 100
print(x[0])  # the output is 100

6. Complex shape transformation example

x = torch.randn(2, 2, 2, 2)  # the shape of x is [2, 2, 2, 2]
y = x.view(2, 8)             # the shape of y becomes [2, 8]
z = x.view(-1)               # z is a one-dimensional tensor with shape [16]

7. Things to note

  • When using view, the total number of elements of the new shape must equal the total number of elements of the original tensor.
  • After operations such as transposing or slicing, the tensor may no longer be contiguous in memory, so contiguous() must be called first.
  • Although view can perform better than reshape, it must be used with more care.
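A minimal sketch of the first point (the shapes here are arbitrary): asking view for a shape whose element count does not match raises a RuntimeError.

```python
import torch

x = torch.randn(2, 3)  # 6 elements in total
y = x.view(3, 2)       # fine: 3 * 2 == 6
try:
    x.view(4, 2)       # 4 * 2 == 8 != 6, so view raises a RuntimeError
except RuntimeError as err:
    print("view failed:", err)
```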

8. Advanced Application

In deep learning models, view is often used to reshape inputs or outputs, for example when transitioning between convolutional layers and fully connected layers.

# Simulate a CNN output
x = torch.randn(16, 3, 28, 28)  # a batch of 16 images, 3 channels, 28×28
y = x.view(16, -1)  # flatten each feature map into a vector; the shape becomes [16, 2352]

Comparison of view and reshape

In PyTorch, both view and reshape are used to change the shape of a tensor, but they have key differences in behavior, memory management, and usage scenarios. Here is a detailed comparison:

1. Core function comparison

| Characteristic | view | reshape |
| --- | --- | --- |
| Memory contiguity requirement | Must be contiguous in memory | None; non-contiguous tensors are handled automatically |
| Return type | Always a view of the original tensor (no data copied) | A view or a copy (depending on whether copying is required) |
| Error handling | Raises RuntimeError if the tensor is non-contiguous | Calls contiguous() automatically, avoiding the error |

2. The impact of memory contiguity

Strict requirements for view

x = torch.randn(2, 3)
y = x.t()                    # the transpose makes y non-contiguous
z = y.view(6)                # error: RuntimeError
z = y.contiguous().view(6)   # correct: convert to a contiguous tensor first

Reshape's flexibility

x = torch.randn(2, 3)
y = x.t()
z = y.reshape(6)  # equivalent to y.contiguous().view(6); contiguity is handled automatically

3. The difference between view and copy

The view semantics of view

x = torch.tensor([1, 2, 3, 4])
y = x.view(2, 2)
y[0, 0] = 100
print(x[0])  # Output: 100 (the original tensor was modified)

Potential copy of reshape

x = torch.tensor([[1, 2], [3, 4]])
y = x.t().reshape(4)  # the transpose makes the tensor non-contiguous, so reshape copies the data
y[0] = 100
print(x[0, 0])  # Output: 1 (the original tensor is unchanged; reshape created a copy)

4. Performance and efficiency

  • view: a zero-copy operation with high memory efficiency, suitable for high-performance computing.
  • reshape: may copy the data, which costs more, but the code is simpler.

Recommendation: if you need guaranteed performance and know the tensor is contiguous, prefer view; if you are unsure about contiguity or want simpler code, use reshape.
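One way to observe the zero-copy behavior directly (a small sketch; data_ptr() returns the address of a tensor's underlying storage) is to compare data pointers:

```python
import torch

x = torch.randn(2, 3)

# a view of a contiguous tensor shares storage with the original: same data pointer
v = x.view(6)
print(v.data_ptr() == x.data_ptr())   # True

# reshape of a non-contiguous tensor must copy into new storage: different data pointer
r = x.t().reshape(6)
print(r.data_ptr() == x.data_ptr())   # False
```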

5. Usage scenarios

| Scenario | Recommended function | Reason |
| --- | --- | --- |
| The tensor is known to be contiguous and efficiency matters | view | Avoids unnecessary copies |
| The tensor may be non-contiguous | reshape | Handles contiguity automatically, avoiding errors |
| Fixed reshapes in deep learning models | view | e.g. flattening CNN features before a fully connected layer |
| Rapid prototyping or simpler code | reshape | Fewer contiguous() calls |

6. Special case: -1 automatically infers a dimension

Both support -1 as a placeholder; PyTorch automatically computes the size of that dimension:

x = torch.randn(2, 3, 4)
y = x.view(2, -1)        # shape: [2, 12]
z = x.reshape(-1, 3, 2)  # shape: [4, 3, 2]

Summary comparison table

| Feature | view | reshape |
| --- | --- | --- |
| Memory contiguity requirement | Must be contiguous | None |
| Guaranteed zero-copy | Yes | No (may copy) |
| Non-contiguous tensors | Require a manual contiguous() call | Handled automatically |
| Code simplicity | Lower (contiguity must be tracked) | Higher |
| Performance | High | Medium (possible copy overhead) |

Best Practices

  • Prefer view: when the tensor is known to be contiguous and performance matters (such as in a model's forward pass).
  • Use reshape: during data processing or when contiguity is uncertain, to avoid RuntimeError.
  • Debugging tip: if view raises an error, check whether the tensor is contiguous (using tensor.is_contiguous()).
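A short sketch of that debugging check (the shapes here are chosen arbitrarily):

```python
import torch

x = torch.randn(2, 3)
print(x.is_contiguous())   # True: freshly allocated tensors are contiguous
y = x.t()
print(y.is_contiguous())   # False: transpose only rearranges strides
z = y.contiguous()         # makes a compact copy that view can accept
print(z.is_contiguous())   # True
```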

This concludes this article on implementation examples of the view() function in PyTorch. For more on the PyTorch view() function, please search my earlier articles or continue browsing the related articles below. I hope you will continue to support me!