Last Updated on 2021-10-12 by Clay
When I build a model with PyTorch, I often feel at a loss as to how to append data to the end of a sequence while processing it. The append() function is quite handy for Python list data, but we cannot use it on a torch tensor. I found a useful method on the Internet: use torch.cat() to concatenate the data in the sequence.
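For example, a Python list can grow with append(), but a torch tensor has no such method. A quick illustrative sketch (the variable names here are my own):

import torch

values = [1, 2, 3]
values.append(4)   # works: Python lists support append()

t = torch.tensor([1, 2, 3])
# t.append(4)      # raises AttributeError: 'Tensor' object has no attribute 'append'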
How To Use torch.cat()
The use of torch.cat() is very simple; see the code below for details.
import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])

# Concatenate along dim 0: the second tensor is appended after the first
ab = torch.cat((a, b), 0)
ba = torch.cat((b, a), 0)

print('ab:', ab)
print('ba:', ba)
Output:
ab: tensor([1, 2, 3, 4, 5, 6])
ba: tensor([4, 5, 6, 1, 2, 3])
It can be seen that the concatenation simply places the "next item" after the "previous item". But what does the 0 mean? The following example makes the difference easier to see.
import torch

a = torch.tensor([[1, 2, 3]])
b = torch.tensor([[4, 5, 6]])

# dim 0 stacks the tensors as new rows; dim 1 joins them side by side as new columns
print('0:', torch.cat((a, b), 0))
print('1:', torch.cat((a, b), 1))
Output:
0: tensor([[1, 2, 3],
[4, 5, 6]])
1: tensor([[1, 2, 3, 4, 5, 6]])
As you can see, the 0 and 1 arguments specify the dimension along which the tensors are concatenated: dim 0 appends new rows, while dim 1 appends new columns.
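In practice, the typical way to "append" during processing is to concatenate inside a loop. Below is a minimal sketch under my own assumptions (the shapes and the variable names results and new_row are illustrative, not from the examples above):

import torch

# Start with an empty tensor that has 0 rows and 3 columns
results = torch.empty((0, 3))

for i in range(3):
    # A hypothetical new row produced at each processing step
    new_row = torch.tensor([[i, i + 1, i + 2]], dtype=torch.float32)
    # Append the new row along dim 0
    results = torch.cat((results, new_row), 0)

print(results)
# tensor([[0., 1., 2.],
#         [1., 2., 3.],
#         [2., 3., 4.]])

Note that every torch.cat() call copies the data, so if the loop is long it is usually cheaper to collect the pieces in a Python list and call torch.cat() once at the end.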
References
- https://pytorch.org/docs/master/generated/torch.cat.html
- https://discuss.pytorch.org/t/appending-in-pytorch/39313