Last Updated on 2021-04-24 by Clay
Today I took on a classifier for a different dataset: CIFAR-10, a task somewhat harder than MNIST handwritten-digit recognition. Besides the image size growing to 32 x 32, CIFAR-10 images are no longer plain grayscale values but full RGB color images.
Since the task is harder, this time the model is not built from fully-connected layers alone; I practiced the classic combination of convolution layers plus max pooling.
Much of my code is adapted from the official PyTorch tutorial, especially the choice of model layers, because whenever I change the architecture the results get worse, haha. Since I am writing this up as a note anyway, why not use the parameters that are known to work better?
With that said, let's begin the notes!
Code Explanation
# -*- coding: utf-8 -*-
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
First, import the required PyTorch packages.
# GPU
device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
print('GPU state:', device)
Check whether a GPU is available; if not, fall back to the CPU.
# Cifar-10 data
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
Once again we use transforms from torchvision to convert the images. It is a genuinely handy tool; I have recently been writing an article documenting its various features.
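As a quick sketch of what this particular Normalize call does (assuming the values have already been scaled to [0, 1] by ToTensor()): with mean = std = 0.5 per channel, every pixel value is mapped into [-1, 1]. A torch-free illustration of the formula:

```python
# Sketch of transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)):
# each channel value is mapped to (value - mean) / std.
def normalize(value, mean=0.5, std=0.5):
    return (value - mean) / std

print(normalize(0.0))  # darkest pixel   -> -1.0
print(normalize(1.0))  # brightest pixel ->  1.0
```

Centering the inputs around zero like this tends to make gradient-based training better behaved than feeding raw [0, 1] values.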
# Data
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
trainLoader = torch.utils.data.DataLoader(trainset, batch_size=8, shuffle=True, num_workers=2)
testLoader = torch.utils.data.DataLoader(testset, batch_size=8, shuffle=False, num_workers=2)
Since CIFAR-10 ships with PyTorch's datasets, we can download it directly here without any extra manual setup.
If there is no data folder in the current directory, one is created automatically and the CIFAR-10 data is stored inside it.
Also, batch_size can be tuned freely; so far the highest accuracy I have gotten was with 8.
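To make the numbers concrete (a small back-of-the-envelope check, not part of the original code): CIFAR-10 has fixed sizes of 50,000 training and 10,000 test images, so with batch_size=8 the loaders yield the following batch counts per pass:

```python
import math

# CIFAR-10 sizes are fixed: 50,000 training and 10,000 test images.
train_size, test_size, batch_size = 50_000, 10_000, 8

print(math.ceil(train_size / batch_size))  # batches per training epoch -> 6250
print(math.ceil(test_size / batch_size))   # batches per test pass      -> 1250
```

Because 10,000 divides evenly by 8, every test batch is full, which is what keeps the per-sample loop in the per-class evaluation later in this post in bounds.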
# Data classes
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
The ten classes in CIFAR-10.
# Model structure
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(16*5*5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16*5*5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net().to(device)
print(net)
Output:
Net(
(conv1): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
(conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=400, out_features=120, bias=True)
(fc2): Linear(in_features=120, out_features=84, bias=True)
(fc3): Linear(in_features=84, out_features=10, bias=True)
)
This is the model architecture, and the part I dare not change casually. I have tried, but the results easily get worse, or simply do not change; I need more time to experiment.
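One detail worth spelling out: fc1's in_features of 16*5*5 = 400 follows directly from the layer shapes. A plain-Python sanity check of that arithmetic, mirroring the standard conv/pool output-size formula:

```python
def conv_out(size, kernel, stride=1, padding=0):
    # Output side length of a square conv or pool layer.
    return (size + 2 * padding - kernel) // stride + 1

size = 32                    # CIFAR-10 images are 32 x 32
size = conv_out(size, 5)     # conv1 (5x5 kernel):   32 -> 28
size = conv_out(size, 2, 2)  # pool (2x2, stride 2): 28 -> 14
size = conv_out(size, 5)     # conv2 (5x5 kernel):   14 -> 10
size = conv_out(size, 2, 2)  # pool (2x2, stride 2): 10 -> 5

print(16 * size * size)      # 16 channels * 5 * 5 -> 400, fc1's in_features
```

This is why the architecture is fragile: change any kernel size or add padding, and the 16*5*5 in both __init__ and forward must be recomputed to match.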
# Parameters
criterion = nn.CrossEntropyLoss()
lr = 0.001
epochs = 3
optimizer = optim.SGD(net.parameters(), lr=lr, momentum=0.9)
Parameter settings: the loss function (CrossEntropy, the classic choice for multi-class classification), the learning rate, the number of epochs, and the optimizer.
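For reference, nn.CrossEntropyLoss combines LogSoftmax and NLLLoss, which is why the model's forward returns raw logits with no softmax at the end. A torch-free sketch of the formula for a single sample (the logit values are made up for illustration):

```python
import math

def cross_entropy(logits, target):
    # -log(softmax probability assigned to the true class)
    total = sum(math.exp(z) for z in logits)
    return -math.log(math.exp(logits[target]) / total)

loss = cross_entropy([2.0, 1.0, 0.1], target=0)
print(round(loss, 3))  # roughly 0.417
```

The more confidently the largest logit sits on the true class, the closer the loss gets to zero.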
# Train
for epoch in range(epochs):
    running_loss = 0.0
    for times, data in enumerate(trainLoader, 0):
        inputs, labels = data
        inputs, labels = inputs.to(device), labels.to(device)

        # Zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if times % 100 == 99 or times+1 == len(trainLoader):
            print('[%d/%d, %d/%d] loss: %.3f' % (epoch+1, epochs, times+1, len(trainLoader), running_loss/2000))

print('Finished Training')
This is the training loop. Note the optimizer.zero_grad() call: the gradients must be cleared before every weight update, otherwise they keep accumulating.
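A tiny torch-free illustration of that accumulation (the 0.25 values are stand-ins for per-step gradients): loss.backward() adds onto the existing .grad buffer, so skipping the reset sums gradients across steps:

```python
grad = 0.0  # stand-in for a parameter's .grad buffer
for step_gradient in [0.25, 0.25, 0.25]:
    # grad = 0.0           # <- the reset that optimizer.zero_grad() performs
    grad += step_gradient  # loss.backward() *adds* to the existing gradient
print(grad)  # -> 0.75 instead of the 0.25 of the last step alone
```

With the reset in place, each optimizer.step() sees only the gradient of the current batch, which is what SGD expects.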
# Test
correct = 0
total = 0
with torch.no_grad():
    for data in testLoader:
        inputs, labels = data
        inputs, labels = inputs.to(device), labels.to(device)

        outputs = net(inputs)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test inputs: %d %%' % (100 * correct / total))

class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
    for data in testLoader:
        inputs, labels = data
        inputs, labels = inputs.to(device), labels.to(device)

        outputs = net(inputs)
        _, predicted = torch.max(outputs, 1)
        c = (predicted == labels).squeeze()
        for i in range(8):
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1

for i in range(10):
    print('Accuracy of %5s : %2d %%' % (classes[i], 100 * class_correct[i] / class_total[i]))
Output:
Accuracy of the network on the 10000 test inputs: 55 %
Accuracy of plane : 57 %
Accuracy of car : 72 %
Accuracy of bird : 31 %
Accuracy of cat : 16 %
Accuracy of deer : 53 %
Accuracy of dog : 68 %
Accuracy of frog : 59 %
Accuracy of horse : 65 %
Accuracy of ship : 56 %
Accuracy of truck : 71 %
This is the testing part; as you can see, our model really is not just guessing at random.
Full Code
# -*- coding: utf-8 -*-
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms

# GPU
device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
print('GPU state:', device)

# Cifar-10 data
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

# Data
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
trainLoader = torch.utils.data.DataLoader(trainset, batch_size=8, shuffle=True, num_workers=2)
testLoader = torch.utils.data.DataLoader(testset, batch_size=8, shuffle=False, num_workers=2)

# Data classes
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

# Model structure
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(16*5*5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16*5*5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net().to(device)
print(net)

# Parameters
criterion = nn.CrossEntropyLoss()
lr = 0.001
epochs = 3
optimizer = optim.SGD(net.parameters(), lr=lr, momentum=0.9)

# Train
for epoch in range(epochs):
    running_loss = 0.0
    for times, data in enumerate(trainLoader, 0):
        inputs, labels = data
        inputs, labels = inputs.to(device), labels.to(device)

        # Zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if times % 100 == 99 or times+1 == len(trainLoader):
            print('[%d/%d, %d/%d] loss: %.3f' % (epoch+1, epochs, times+1, len(trainLoader), running_loss/2000))

print('Finished Training')

# Test
correct = 0
total = 0
with torch.no_grad():
    for data in testLoader:
        inputs, labels = data
        inputs, labels = inputs.to(device), labels.to(device)

        outputs = net(inputs)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test inputs: %d %%' % (100 * correct / total))

class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
    for data in testLoader:
        inputs, labels = data
        inputs, labels = inputs.to(device), labels.to(device)

        outputs = net(inputs)
        _, predicted = torch.max(outputs, 1)
        c = (predicted == labels).squeeze()
        for i in range(8):
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1

for i in range(10):
    print('Accuracy of %5s : %2d %%' % (classes[i], 100 * class_correct[i] / class_total[i]))
May I ask why the loss part is running_loss/2000? Where does the 2000 come from?
Hello, nice to meet you.
Embarrassingly, this program was written so long ago that, looking at it now, I have suddenly forgotten why I divided by 2000.
My guess is that I just wanted the average loss over those 100 steps, and the 2000 was probably left over from earlier edits that I forgot to update.
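In other words, the print was presumably meant to average over the 100-batch window and then reset the accumulator, something like this sketch (the 2.0 values are dummies standing in for loss.item()):

```python
running_loss = 0.0
averages = []
dummy_losses = [2.0] * 300  # stand-ins for loss.item() per batch
for times, loss_value in enumerate(dummy_losses):
    running_loss += loss_value
    if times % 100 == 99:
        averages.append(running_loss / 100)  # average of the last 100 batches
        running_loss = 0.0                   # reset so the next window starts fresh
print(averages)  # -> [2.0, 2.0, 2.0]
```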
Sorry for the confusion.
Okay, thanks.
Hello! Sorry to bother you. Could I ask how to increase the accuracy?
Hi, do you mean improving the performance of the classification model?
You can look up the top scores on the CIFAR-10 task online; as I recall, quite a few people have shared their results, though of course the model architectures are more complex.