[PyTorch] How to Save the Trained Model, and Load the Model
When we use PyTorch, a useful Python deep learning framework, for model training, we sometimes forget to "store" the trained model, perhaps without even being aware of it. (Such as me in the past.)
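As a minimal sketch of the idea, PyTorch's recommended pattern is to save the model's `state_dict` and load it back into a freshly built model of the same architecture (the `nn.Linear` model and the file name here are purely illustrative):

```python
import torch
import torch.nn as nn

# A tiny stand-in for any trained model (hypothetical architecture)
model = nn.Linear(4, 2)

# Save only the learned parameters (the recommended approach)
torch.save(model.state_dict(), "model.pth")

# Later: rebuild the same architecture and load the weights back
model_loaded = nn.Linear(4, 2)
model_loaded.load_state_dict(torch.load("model.pth"))
model_loaded.eval()  # switch to inference mode before prediction
```

After loading, the restored model carries exactly the same weights as the one that was saved.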
Generative Adversarial Network (GAN) is a famous neural network model. It takes a set of noise as input, generates a set of fake pictures through the Generator, and then uses the Discriminator to distinguish whether each picture is real.
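The two components described above can be sketched as a pair of small networks; all layer sizes here are illustrative, not the article's actual architecture:

```python
import torch
import torch.nn as nn

# Generator: maps a noise vector to a flattened fake image
generator = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),
)

# Discriminator: scores how "real" an image looks, as a probability
discriminator = nn.Sequential(
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

noise = torch.randn(16, 100)           # a batch of random noise
fake_images = generator(noise)         # Generator produces fake pictures
realness = discriminator(fake_images)  # Discriminator judges them
```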
Read More »[PyTorch] Build a GAN model to generate false MNIST pictures

If we have both the model's classification results and the correct answers, we can calculate Binary Cross Entropy, a famous loss function.
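As a quick sketch with made-up values, Binary Cross Entropy averages -[y·log(p) + (1-y)·log(1-p)] over the predictions, which matches PyTorch's built-in function:

```python
import torch
import torch.nn.functional as F

# Predicted probabilities and the correct answers (illustrative values)
p = torch.tensor([0.9, 0.2, 0.7])
y = torch.tensor([1.0, 0.0, 1.0])

# BCE = -mean( y*log(p) + (1-y)*log(1-p) )
manual = -(y * torch.log(p) + (1 - y) * torch.log(1 - p)).mean()
builtin = F.binary_cross_entropy(p, y)
```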
Read More »[Machine Learning] Introduction to Binary Cross Entropy

The sigmoid function is often used in PyTorch as an activation function, for example connected to the last layer of a model as the output of binary classification. Since sigmoid compresses values into the range 0-1, we only need to set a threshold, for example 0.5, to divide the values into two categories.
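A minimal sketch of that thresholding step, with made-up model outputs:

```python
import torch

logits = torch.tensor([-1.2, 0.3, 2.5, -0.1])  # raw model outputs (illustrative)
probs = torch.sigmoid(logits)                  # compressed into the range (0, 1)
labels = (probs > 0.5).long()                  # threshold at 0.5 -> binary classes
```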
Read More »[PyTorch] Set the threshold of Sigmoid output and convert it to binary value

If you are preprocessing some machine learning data, you may need to convert a PyTorch tensor to one-hot encoding. An intuitive method is to convert the tensor to a NumPy array, and then convert the NumPy array to one-hot encoding, just like this article: [Python] Convert the value to one-hot type in Numpy
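For comparison with the NumPy round-trip, PyTorch can also do this directly with `torch.nn.functional.one_hot` (a small sketch with illustrative class indices):

```python
import torch
import torch.nn.functional as F

labels = torch.tensor([0, 2, 1])               # class indices (illustrative)
one_hot = F.one_hot(labels, num_classes=3)     # one-hot encode without NumPy
```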
Read More »[PyTorch] Convert Tensor to One-Hot Encoding Type

NLLLoss is a loss function commonly used in multi-class classification tasks. It takes the log of the probability values after softmax, then averages the negative log-probabilities of the correct answers.
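A small sketch of that computation, with made-up scores, showing that `nn.NLLLoss` on log-softmax outputs matches the by-hand average:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.tensor([[1.0, 2.0, 0.5],
                       [0.3, 0.1, 2.2]])   # raw scores for 3 classes (illustrative)
targets = torch.tensor([1, 2])             # correct class indices

log_probs = F.log_softmax(logits, dim=1)   # log of the softmax probabilities
loss = nn.NLLLoss()(log_probs, targets)    # average negative log-prob of targets

# Equivalent by hand: pick the correct entries, negate, and average
manual = -log_probs[torch.arange(2), targets].mean()
```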
Read More »[Machine Learning] NLLLoss function introduction and program implementation

Today I want to record how to use a Deep Convolutional Generative Adversarial Network (DCGAN) to implement a simple picture-generation model. I wanted to demo with delicious snack pictures, but the effect was not very good; I downloaded half a million snack pictures in vain.
Finally, I used the official demo CelebA dataset.
Read More »[PyTorch] Tutorial(7) Use Deep Generative Adversarial Network (DCGAN) to generate pictures

Just as torchvision is the module in PyTorch that specializes in processing pictures, torchaudio, to be recorded today, is the module in PyTorch that specializes in processing audio.
Today we challenge the classifier of a different dataset again. This time it is CIFAR-10, a more difficult problem than MNIST handwriting recognition. In addition to the picture size growing to 32×32, CIFAR-10 is no longer pure grayscale, but pictures with the three primary colors of RGB.
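One practical consequence of the RGB difference: the first convolution layer of a CIFAR-10 model must accept 3 input channels instead of 1. A minimal sketch (the channel and kernel sizes here are illustrative, not the tutorial's actual network):

```python
import torch
import torch.nn as nn

# CIFAR-10 images are RGB, so the first conv layer takes 3 input channels
conv1 = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)

batch = torch.zeros(8, 3, 32, 32)   # a dummy batch shaped like CIFAR-10
out = conv1(batch)                  # padding=1 keeps the 32x32 spatial size
```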
Read More »[PyTorch] Tutorial(5) How to train a model to classify CIFAR-10 database

"Use a toy dataset to train a classification model" is the simplest deep learning practice.
Today I want to record how to use the MNIST handwritten digit recognition dataset to build a simple classifier in PyTorch.
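A minimal sketch of such a classifier (the layer sizes are illustrative, not the tutorial's actual model): MNIST images are 1×28×28, so they are flattened to 784 values and mapped to 10 digit classes.

```python
import torch
import torch.nn as nn

# Tiny illustrative MNIST classifier
classifier = nn.Sequential(
    nn.Flatten(),            # 1x28x28 image -> 784-dim vector
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 10),      # one score per digit class (0-9)
)

batch = torch.zeros(32, 1, 28, 28)   # a dummy batch shaped like MNIST
scores = classifier(batch)           # class scores for each image
```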
Read More »[PyTorch] Tutorial(4) Train a model to classify MNIST dataset