
[Machine Learning] Introduction to the NLLLoss function and its implementation

NLLLoss (negative log likelihood loss) is a loss function commonly used in multi-class classification tasks. It takes the log-probabilities produced by LogSoftmax, picks out the log-probability assigned to the correct class for each sample, negates it, and averages over all samples.

The lower this loss, the better. We can also see this in practice: the higher the probability the model assigns to the standard answer, the lower the loss will be.
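In symbols, and assuming the default mean reduction: given the LogSoftmax output x and the target class indices y for N samples, the loss is

\[
\text{loss}(x, y) = -\frac{1}{N} \sum_{i=1}^{N} x_{i,\, y_i}
\]

that is, each sample contributes the negated log-probability of its correct class.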

import torch
import torch.nn as nn


# LogSoftmax converts raw scores into log-probabilities along dim=1 (the class dimension)
softmax = nn.LogSoftmax(dim=1)
input = torch.tensor([[-0.1, 0.2, -0.3, 0.4],
                      [0.5, -0.6, 0.7, -0.8],
                      [-0.9, 0.1, -0.11, 0.12]])
output = softmax(input)
print(output)



Output:

tensor([[-1.5722, -1.2722, -1.7722, -1.0722],
        [-1.0391, -2.1391, -0.8391, -2.3391],
        [-2.1627, -1.1627, -1.3727, -1.1427]])

Here I set the input values arbitrarily; the higher a value, the higher the probability of selecting that class. We can see this trend in the LogSoftmax output above.
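As a quick sanity check (a minimal sketch reusing the output tensor from above), exponentiating the LogSoftmax result recovers the ordinary softmax probabilities, and each row sums to 1:

probs = output.exp()  # undo the log to get the softmax probabilities
print(probs.sum(dim=1))  # each row sums to 1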

Then we assume that the standard answer is (1, 2, 3), i.e. the class indices (counting from 0), so the log-probabilities we pick out are -1.2722, -0.8391, and -1.1427.

Then we calculate the value of NLLLoss by hand: as described above, negate each value, add them up, and average.

print((1.2722+0.8391+1.1427)/3)



Output:

1.0846666666666667
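The same hand calculation can be generalized with tensor indexing (a minimal sketch reusing the output tensor from above):

# Pick each row's log-probability at its target class index,
# then negate and average -- exactly the hand calculation above.
picked = output[torch.arange(3), torch.tensor([1, 2, 3])]
print(-picked.mean())  # tensor(1.0847)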

Then we use the built-in nn.NLLLoss() in PyTorch to confirm.

nll = nn.NLLLoss()
target = torch.tensor([1, 2, 3])  # the correct class index for each sample
print(nll(output, target))



Output:

tensor(1.0847)

You can see that the results match. This is NLLLoss, a loss function well suited to multi-class classification.
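Worth noting: PyTorch's nn.CrossEntropyLoss combines LogSoftmax and NLLLoss in a single step, so feeding it the raw input (before LogSoftmax) together with the same target gives the same result:

ce = nn.CrossEntropyLoss()
# CrossEntropyLoss = LogSoftmax + NLLLoss applied to the raw scores
print(ce(input, target))  # tensor(1.0847)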

