torch.cdist is a PyTorch function that computes the pairwise distances between two sets of points stored as tensors. It is often used in point cloud processing, graph neural networks, similarity measurement and other scenarios.
Basic syntax
```python
torch.cdist(x1, x2, p=2.0)
```
Parameter description:
| Parameter | Description |
| --- | --- |
| x1 | A tensor of shape [B, M, D] or [M, D], representing one set of points. |
| x2 | A tensor of shape [B, N, D] or [N, D], representing another set of points. |
| p | The distance norm. The default p=2.0 is the Euclidean distance (L2 norm); it can also be set to 1.0 (Manhattan distance) or another value. |
Output
The output is a tensor whose shape depends on the inputs:
- If x1 has shape [M, D] and x2 has shape [N, D], the output has shape [M, N] (a quick shape check follows below);
- Each entry (i, j) is the distance between x1[i] and x2[j].
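A quick way to confirm these shape rules is to call torch.cdist on small random tensors and inspect the result's shape; the sketch below does that for both the unbatched and the batched case (the sizes are arbitrary).

```python
import torch

# Unbatched: [M, D] x [N, D] -> [M, N]
a = torch.rand(5, 3)            # M=5 points, D=3 features
b = torch.rand(7, 3)            # N=7 points, D=3 features
print(torch.cdist(a, b).shape)  # torch.Size([5, 7])

# Batched: [B, M, D] x [B, N, D] -> [B, M, N]
a = torch.rand(2, 5, 3)
b = torch.rand(2, 7, 3)
print(torch.cdist(a, b).shape)  # torch.Size([2, 5, 7])
```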
Example
1. Simple 2D Euclidean distance
```python
import torch

x1 = torch.tensor([[0.0, 0.0], [1.0, 0.0]])  # 2 points
x2 = torch.tensor([[0.0, 1.0], [1.0, 1.0]])  # 2 points
dist = torch.cdist(x1, x2, p=2)
print(dist)
```
The output is:
```
tensor([[1.0000, 1.4142],
        [1.4142, 1.0000]])
```
That is:
- The distance between x1[0] and x2[0] is 1;
- The distance between x1[0] and x2[1] is sqrt(2), etc. (checked numerically below).
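Each entry can be checked against an explicit torch.norm of the corresponding pair of points; the short check below reuses the tensors from example 1.

```python
import torch

x1 = torch.tensor([[0.0, 0.0], [1.0, 0.0]])
x2 = torch.tensor([[0.0, 1.0], [1.0, 1.0]])
dist = torch.cdist(x1, x2, p=2)

# dist[0, 1] should equal ||x1[0] - x2[1]||_2 = sqrt(1^2 + 1^2) = sqrt(2)
print(torch.norm(x1[0] - x2[1]))  # tensor(1.4142)
print(dist[0, 1])                 # tensor(1.4142)
```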
2. Batched form (3D tensors)
```python
import torch

x1 = torch.rand(2, 5, 3)   # batch=2, 5 three-dimensional points per group
x2 = torch.rand(2, 4, 3)   # batch=2, 4 three-dimensional points per group
out = torch.cdist(x1, x2)  # the output shape is [2, 5, 4]
```
3. Using different norms
```python
torch.cdist(x1, x2, p=1)             # Manhattan distance (L1)
torch.cdist(x1, x2, p=2)             # Euclidean distance (L2, default)
torch.cdist(x1, x2, p=float('inf'))  # Chebyshev distance (maximum per-dimension difference)
```
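For intuition, running the three norms on the two small point sets from example 1 gives different distance matrices:

```python
import torch

x1 = torch.tensor([[0.0, 0.0], [1.0, 0.0]])
x2 = torch.tensor([[0.0, 1.0], [1.0, 1.0]])

print(torch.cdist(x1, x2, p=1))             # tensor([[1., 2.], [2., 1.]])
print(torch.cdist(x1, x2, p=2))             # tensor([[1.0000, 1.4142], [1.4142, 1.0000]])
print(torch.cdist(x1, x2, p=float('inf')))  # tensor([[1., 1.], [1., 1.]])
```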
Things to note
- The last dimension (the feature dimension D) of x1 and x2 must be the same.
- p=2 is the most efficient case; other norms may be slower.
- If both tensors are large, this operation can consume a lot of memory (a chunked workaround is sketched below).
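When the full [M, N] distance matrix does not fit in memory, one common workaround is to process x1 in chunks and keep only the quantity you actually need (for example, the nearest distance per point). The helper below, chunked_min_dist, is a hypothetical sketch of that idea, not part of the torch.cdist API, and the chunk size is an arbitrary choice.

```python
import torch

def chunked_min_dist(x1, x2, chunk=1024, p=2.0):
    """For each point in x1, return the distance to its nearest point in x2,
    computing the distance matrix one chunk of x1 rows at a time."""
    mins = []
    for start in range(0, x1.shape[0], chunk):
        d = torch.cdist(x1[start:start + chunk], x2, p=p)  # at most [chunk, N]
        mins.append(d.min(dim=1).values)                   # keep only the row minima
    return torch.cat(mins)

x1 = torch.rand(5000, 3)
x2 = torch.rand(3000, 3)
print(chunked_min_dist(x1, x2).shape)  # torch.Size([5000])
```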
Examples of application scenarios
- Distance calculation between point clouds (such as ISS, FPFH, ICP)
- Distance matrix construction for matched point pairs
- KNN queries (see the sketch below)
- Graph construction (adjacency matrix, compatibility matrix)
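As an illustration of the KNN use case, the k nearest points in x2 for every point in x1 can be found by combining torch.cdist with torch.topk; the snippet below is a minimal sketch (the point counts and k are arbitrary).

```python
import torch

x1 = torch.rand(10, 3)  # query points
x2 = torch.rand(50, 3)  # reference points
k = 4

dist = torch.cdist(x1, x2)                                     # [10, 50]
knn_dist, knn_idx = torch.topk(dist, k, dim=1, largest=False)  # k smallest per row
print(knn_idx.shape)   # torch.Size([10, 4]), indices into x2
print(knn_dist.shape)  # torch.Size([10, 4])
```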
torch.sum is a PyTorch function that sums tensor elements. It works much like the sum function in NumPy, but lets you choose the dimensions to operate on more flexibly.
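As a quick illustration of that similarity, the snippet below computes the same column sums with numpy.sum and torch.sum (NumPy uses axis where PyTorch uses dim).

```python
import numpy as np
import torch

a = np.array([[1, 2], [3, 4]])
t = torch.tensor([[1, 2], [3, 4]])

print(np.sum(a, axis=0))    # [4 6]
print(torch.sum(t, dim=0))  # tensor([4, 6])
```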
Basic usage
```python
torch.sum(input, dim=None, keepdim=False)
```
Parameter description:
- input: the tensor to be summed;
- dim (optional): the dimension along which the sum is performed;
- keepdim (optional): a boolean indicating whether to retain the summed dimension (not retained by default).
Examples
Example 1: Sum all elements
```python
import torch

x = torch.tensor([[1, 2], [3, 4]])
torch.sum(x)  # Output: tensor(10)
```
Example 2: Specify dimension sum
```python
x = torch.tensor([[1, 2], [3, 4]])
torch.sum(x, dim=0)  # sum down the columns: 1+3, 2+4 -> tensor([4, 6])
torch.sum(x, dim=1)  # sum across the rows: 1+2, 3+4 -> tensor([3, 7])
```
Example 3: Preserve the dimension
```python
x = torch.tensor([[1, 2], [3, 4]])
torch.sum(x, dim=1, keepdim=True)  # Output: tensor([[3], [7]])
```
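Part of the flexibility mentioned earlier is that dim can also be a tuple of dimensions, and negative indices count from the end, just as in ordinary indexing; a short sketch:

```python
import torch

x = torch.ones(2, 3, 4)

print(torch.sum(x, dim=(1, 2)))                      # tensor([12., 12.])
print(torch.sum(x, dim=-1).shape)                    # torch.Size([2, 3])
print(torch.sum(x, dim=(1, 2), keepdim=True).shape)  # torch.Size([2, 1, 1])
```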
This concludes the detailed, example-based walkthrough of the cdist and sum functions in PyTorch. For more on using cdist and sum, please search my previous articles or continue browsing the related articles below. I hope you will keep supporting me!