[PyTorch] Tutorial (1): What is a Tensor?

In this post I will introduce PyTorch, a very famous machine learning tool, and record the notes from my own study.

If I don’t take notes, I’ll forget everything, hahaha.

Calling PyTorch a package is not quite accurate; it is better described as a Python-based deep learning framework.

As far as I know, this tool was first developed by Facebook based on Torch, a machine learning framework written in Lua.

It has gradually become convenient and easy to learn, it has even been endorsed by many experts, and more and more toolkits are being developed around it.

In short, I finally started studying PyTorch at my professor’s (forced) urging.

I don’t know whether I will write a tutorial about installing CUDA and cuDNN for PyTorch. In any case, I also followed tutorials from experts on the internet, so if you install the dependency packages by following an online tutorial to set up your environment, I think that’s great!

Of course, if you only want to use the CPU, installation is very easy; Windows is just a little more troublesome.

And now, let’s start.


Tensor

PyTorch lets us customize our model layers more freely than Keras (a famous package for building machine learning models), and it is easier to understand than TensorFlow, the backend behind Keras.

A Tensor is the data type we work with in PyTorch.

Tensors can be combined in all sorts of ways; we can convert NumPy arrays to Tensors, and vice versa. More importantly, Tensors support CUDA GPU acceleration, allowing us to use the GPU for deep learning (at least this is the main purpose of my learning PyTorch).

from __future__ import print_function  # print() compatibility with Python 2
import torch


First, we have to import the package “torch” (of course, we have to pip3 install torch first). Following the official installation tutorial is best.

And here is our first simple piece of code:

x = torch.empty(5, 3)
print(x)


This is the most basic example: an uninitialized 5×3 matrix:

tensor([[2.6165e+11, 4.5695e-41, 2.6165e+11],
        [4.5695e-41, 2.0283e-19, 2.8825e+32],
        [2.7262e+20, 1.2119e+25, 2.0283e-19],
        [2.7909e+23, 1.5986e+34, 1.1626e+27],
        [8.9666e-33, 1.3563e-19, 1.8578e-01]])
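
Note that torch.empty only allocates memory without initializing it, so the values above are whatever happened to be in memory and will differ on every run. As a quick aside (my own sketch, not part of the original notes), we can inspect a tensor’s shape and data type like this:

x = torch.empty(5, 3)
print(x.size())   # torch.Size([5, 3]) -- the matrix dimensions
print(x.dtype)    # torch.float32, the default floating-point type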

We can also fill the matrix with random values; we only need to set the dimensions of the matrix in advance:

x = torch.rand(5, 3)
print(x)


Output:

tensor([[0.4819, 0.2061, 0.8834],
        [0.7812, 0.4246, 0.5375],
        [0.8140, 0.2164, 0.5493],
        [0.2412, 0.3382, 0.7903],
        [0.3368, 0.5848, 0.8287]])

Isn’t it convenient?
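
As a side note that is not in my original walkthrough: torch.rand samples uniformly from [0, 1). If you want normally distributed values instead, PyTorch also provides torch.randn:

# Values drawn from a standard normal distribution (mean 0, std 1)
x = torch.randn(5, 3)
print(x)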

Let’s look at an example of initializing a matrix, this time setting all the matrix values to 0.

x = torch.zeros(5, 3)
print(x)


Again, we set the matrix dimensions first, then execute the program.

Output:

tensor([[0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 0.]])
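
By the way (my own addition, not from the original notes), torch.ones fills the matrix with 1s in the same way, and the *_like variants create a new tensor with another tensor’s shape:

x = torch.ones(5, 3)      # a 5x3 matrix of ones
y = torch.zeros_like(x)   # zeros with the same shape (and dtype) as x
print(x)
print(y)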

In addition to the above methods of initializing a matrix, we can in fact also set the matrix values directly:

x = torch.tensor([[1, 1], [2, 2]])
print(x)

We set up a two-dimensional matrix to see whether PyTorch really prints it as we hope:

tensor([[1, 1],
        [2, 2]])
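
It does! One thing to note (my own addition): because we passed Python integers, PyTorch inferred an integer dtype. If we want floating-point values, we can set dtype explicitly; a small sketch:

# Force a floating-point matrix even though the inputs are integers
x = torch.tensor([[1, 1], [2, 2]], dtype=torch.float32)
print(x)   # tensor([[1., 1.], [2., 2.]])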

Matrix Operations

Here’s a quick introduction to operating on Tensor data. It is very simple and intuitive in PyTorch:

x = torch.rand(2, 2)
y = torch.rand(2, 2)
print('x:', x)
print('y:', y)
print('x+y:', x+y)
print('x-y:', x-y)


Output:

x: tensor([[0.0138, 0.7394],
           [0.0525, 0.8128]])
y: tensor([[0.3438, 0.9111],
           [0.9264, 0.3652]])
x+y: tensor([[0.3576, 1.6505],
             [0.9789, 1.1780]])
x-y: tensor([[-0.3300, -0.1718],
             [-0.8739,  0.4476]])

Isn’t it intuitive?
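
Beyond addition and subtraction, a few other common operations work the same way. This is a quick sketch of my own, not part of the original notes:

x = torch.rand(2, 2)
y = torch.rand(2, 2)

print(x * y)   # element-wise multiplication
print(x @ y)   # matrix multiplication, same as torch.matmul(x, y)
y.add_(x)      # in-place addition: methods ending in _ modify y itself
print(y)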


NumPy

PyTorch’s Tensor is very convenient and easy to use; I don’t think I need to say it again.

As for the most prestigious matrix-processing package in Python, I am afraid it is none other than NumPy.

Below I will introduce how to convert between PyTorch Tensors and NumPy arrays.

Convert from Tensor to NumPy and from NumPy to Tensor:

import numpy as np

# Convert a Tensor to a NumPy array
x = torch.ones(5)
y = x.numpy()
print(x)
print(y)

# Convert a NumPy array to a Tensor
a = np.ones(5)
b = torch.from_numpy(a)
print(a)
print(b)


Output:

tensor([1., 1., 1., 1., 1.])
[1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1.]
tensor([1., 1., 1., 1., 1.], dtype=torch.float64)

The printed formats are, in order:

  • Tensor
  • NumPy array
  • NumPy array
  • Tensor
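
One detail worth knowing (a PyTorch fact, though it was not in my original draft): for CPU tensors, x.numpy() and torch.from_numpy() share the same underlying memory, so modifying one side changes the other. Continuing from the code above:

x = torch.ones(5)
y = x.numpy()
x.add_(1)   # in-place add on the Tensor...
print(y)    # ...also changes the NumPy array: [2. 2. 2. 2. 2.]

a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)   # in-place add on the array...
print(b)              # ...also changes the Tensor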

CUDA

The most exciting moment! How do we use PyTorch with the GPU for deep learning?

First, you have to remember this instruction:

torch.cuda.is_available()


This command checks whether there is a GPU on your computer that you can use for deep learning.

To keep our program from reporting errors, we can put an if conditional check before the code that is going to perform deep learning, to confirm whether our computer really has a GPU we can call:

if torch.cuda.is_available():
    device = torch.device('cuda')
    x = torch.rand(2, 2)
    y = torch.ones_like(x, device=device)
    x = x.to(device)
    z = x + y
    print(z)
    print(z.to('cpu', torch.double))


Output:

tensor([[1.2155, 1.5125],
        [1.7806, 1.4735]], device='cuda:0')
tensor([[1.2155, 1.5125],
        [1.7806, 1.4735]], dtype=torch.float64)

The above program simply prints the z generated on the CUDA device, and then uses the .to() instruction to move z back to the CPU.

We can see that the devices shown in the two outputs are not the same, which means we really did use GPU acceleration to generate the data, and then restored it back to the CPU!
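
A common pattern (my own sketch, not from the notes above) is to pick the device once at the top of the program, so the same code runs whether or not a GPU is available:

# Fall back to the CPU when no GPU is available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

x = torch.rand(2, 2).to(device)       # move an existing tensor to the device
y = torch.ones(2, 2, device=device)   # or create it there directly
z = x + y
print(z.device)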

If you want to know more about using PyTorch with and without the GPU, you can refer to this page: https://pytorch.org/docs/stable/tensors.html

It lists several different instructions in detail; it’s convenient to check it out!


The above is my current record of my PyTorch notes.

I hope to make progress as soon as possible (depending on how hard the boss forces me, QAQ).

