
[Linux] Install CUDA and CuDNN on Ubuntu 18.04

If we want to use a GPU for deep learning (for example, through TensorFlow or Keras), setting up the environment is not really complicated; it usually takes only a few hours the first time.

There are only 4 main steps:

  1. If your gcc version is 7.x, switch to gcc 6.x
  2. Download and install the GPU driver
  3. Download and install CUDA (the following example is 10.0)
  4. Download and install CuDNN (the following example is 7.6.4)

Before we start, I need to remind you: you may encounter different problems during the installation. Don’t give up at this point. Search your error message on the Internet, and you will usually find a solution.

Below, I record my installation steps.


(Optional) Install GCC

The reason I recommend installing gcc 6.x instead of the default gcc 7.x of Ubuntu 18.04 is that gcc 6.x compiles some CUDA packages more smoothly; I have run into errors with gcc 7.x.

For the installation of gcc 6.x, I referred to: https://gist.github.com/zuyu/7d5682a5c75282c596449758d21db5ed

sudo apt-get update && \
sudo apt-get install build-essential software-properties-common -y && \
sudo add-apt-repository ppa:ubuntu-toolchain-r/test -y && \
sudo apt-get update && \
sudo apt-get install gcc-6 g++-6 -y && \
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-6 60 --slave /usr/bin/g++ g++ /usr/bin/g++-6 && \
gcc -v 

Just copy and paste this into a terminal; as long as the output is 6.x, you’re done.
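
If some other package later needs gcc 7 again, update-alternatives can manage both versions side by side. A small sketch, assuming gcc-7 and g++-7 are still installed in /usr/bin:

sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-7 70 --slave /usr/bin/g++ g++ /usr/bin/g++-7
sudo update-alternatives --config gcc

The second command lets you pick the default version interactively.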


Download and Install GPU Driver

You need to go to the NVIDIA official website to download your GPU driver: https://www.nvidia.com.tw/Download/index.aspx?lang=tw

Select the version that matches your GPU and download it. For example, I have a 2080 Ti in my computer and run 64-bit Ubuntu, so I selected the corresponding options.

After downloading, go to the download path and use the following command to install:

sudo sh NVIDIA-Linux-x86_64-430.50.run
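
If the installer aborts because the open-source nouveau driver is loaded, a common workaround (not part of my original steps, but the conventional fix) is to blacklist nouveau and rebuild the initramfs before re-running the installer:

sudo bash -c 'printf "blacklist nouveau\noptions nouveau modeset=0\n" > /etc/modprobe.d/blacklist-nouveau.conf'
sudo update-initramfs -u
sudo reboot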

You may see a message like:

The distribution-provided pre-install script failed! Are you sure you want to continue?

I encountered this message the first time (and every time afterwards) I installed the driver. I’m not sure why this message appears, but I have seen a discussion on a forum saying, “This is a joke made by Nvidia engineers to test your determination to install!”

So, continue the installation.

Once the kernel module is installed, we restart the computer.

sudo reboot

After rebooting, use the following command to confirm that the driver is installed successfully.

nvidia-smi
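
You can also list the detected GPUs explicitly:

nvidia-smi -L

If your card (for example, the 2080 Ti) shows up here, the driver is working.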

Download and Install CUDA 10.0

I recommend 10.0 because my programs run relatively stably on CUDA 10.0. Of course, you can choose the version you want.

https://developer.nvidia.com/cuda-10.0-download-archive

Choose the version you want to download.

After downloading, go to the download path and install:

sudo sh cuda_10.0.130_410.48_linux.run

During the installation, because the GPU driver has already been installed, we do not need the driver bundled with CUDA; decline that option when the installer asks.
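
After the toolkit is installed, CUDA is usually not on your PATH yet. Assuming the default install location /usr/local/cuda-10.0 (adjust if you changed it), add the following lines to ~/.bashrc:

export PATH=/usr/local/cuda-10.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

Then open a new terminal and check with nvcc --version; it should report release 10.0.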


Download and Install CuDNN 7.6.4

Downloading CuDNN is a bit troublesome: we need to register as an NVIDIA developer first.

https://developer.nvidia.com/rdp/form/cudnn-download-survey

Here, choose your operating system, and then choose the version that matches CUDA 10.0 (if you installed CUDA 10.0 as I did above).

After the download is complete, if it is a .deb file, go to the download path and install it with the following command:

sudo dpkg -i libcudnn7_7.6.4.38+cuda10.0_amd64.deb
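
If you downloaded the .tgz archive instead of the .deb, the usual procedure is to copy the headers and libraries into the CUDA directory. A sketch, assuming the standard archive layout and the default CUDA path:

tar -xzvf cudnn-10.0-linux-x64-v7.6.4.38.tgz
sudo cp cuda/include/cudnn.h /usr/local/cuda/include
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*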

After the installation is complete, reboot again.
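
To confirm the CuDNN version, you can grep the header (with the .deb install it lives under /usr/include; with the .tgz install, under /usr/local/cuda/include):

grep CUDNN_MAJOR -A 2 /usr/include/cudnn.h

You should see CUDNN_MAJOR 7, CUDNN_MINOR 6, and CUDNN_PATCHLEVEL 4.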


TEST

How can I feel at ease if I don’t know whether I can train a model? Here is a simple piece of sample code to test whether the GPU works.

First we may need to install the following packages:

sudo pip3 install tensorflow-gpu
sudo pip3 install keras
sudo pip3 install matplotlib
sudo pip3 install pandas
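
Note that each TensorFlow release is built against a specific CUDA version; as far as I know, tensorflow-gpu 1.13 through 2.0 target CUDA 10.0, so pinning a version may save you trouble (the exact pin below is my suggestion; check the TensorFlow release notes):

sudo pip3 install tensorflow-gpu==1.15

You can then quickly confirm that TensorFlow sees the GPU:

python3 -c "import tensorflow as tf; print(tf.test.is_gpu_available())"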

After installation, you should be able to execute the following code:

# coding: utf-8
from keras.models import Sequential, load_model
from keras.layers import Dense, Flatten, Conv2D, MaxPool2D
from keras.utils import np_utils
from keras.datasets import mnist
import matplotlib.pyplot as plt
import pandas as pd

# MNIST dataset, reshaped to (N, channels, height, width) for channels_first
(X_train, Y_train), (X_test, Y_test) = mnist.load_data()
x_train = X_train.reshape(60000, 1, 28, 28)/255
x_test = X_test.reshape(10000, 1, 28, 28)/255
y_train = np_utils.to_categorical(Y_train)
y_test = np_utils.to_categorical(Y_test)

# Model Structure
model = Sequential()
model.add(Conv2D(filters=32, kernel_size=3, input_shape=(1, 28, 28), activation='relu', padding='same', data_format='channels_first'))
model.add(MaxPool2D(pool_size=2, data_format='channels_first'))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.summary()

# Train
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10, batch_size=64, verbose=1)

# Test
loss, accuracy = model.evaluate(x_test, y_test)
print('Test:')
print('Loss: %s\nAccuracy: %s' % (loss, accuracy))

# Save model
model.save('./CNN_Mnist.h5')

# Load Model
model = load_model('./CNN_Mnist.h5')

# Display
def plot_img(n):
    plt.imshow(X_test[n], cmap='gray')
    plt.show()

def all_img_predict(model):
    model.summary()
    loss, accuracy = model.evaluate(x_test, y_test)
    print('Loss:', loss)
    print('Accuracy:', accuracy)
    predict = model.predict_classes(x_test)
    print(pd.crosstab(Y_test.reshape(-1), predict, rownames=['Label'], colnames=['predict']))

def one_img_predict(model, n):
    predict = model.predict_classes(x_test)
    print('Prediction:', predict[n])
    print('Answer:', Y_test[n])
    plot_img(n)
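
# Example calls for the helpers defined above (my addition, not in the
# original script):
all_img_predict(model)       # confusion table over the whole test set
one_img_predict(model, 0)    # predict and display the first test image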



Remember to open another terminal to see if you have a GPU:

watch -n 1 nvidia-smi

This refreshes the nvidia-smi display every second. Basically, the MNIST example above will consume about 3-10% of the GPU.

I hope everyone can install it smoothly!
