
[Linux] Using Command Line Tools To Test Hard Disk Read/Write Speeds

Last Updated on 2024-08-08 by Clay

When we purchase a new hard drive, or evaluate whether to keep using one, its read/write speed is a primary concern, since it directly affects our day-to-day experience with the drive.

For example, I wanted to install another operating system on an external hard drive, so I decided to assess its performance before proceeding. To evaluate a drive, we can use commands such as dd, hdparm, and fio.

If you don’t want to install additional tools, you can use the system’s built-in dd command; if you are willing to install a new tool but prefer minimal configuration, hdparm is the best choice; and if you don’t mind installing and configuring a tool in exchange for the most detailed results, fio is the better fit.

First, let’s confirm the hard drive’s device name using lsblk. Personally, I recommend hiding loop devices for better readability.

lsblk | grep -v loop


Output:

NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda           8:0    0 931.5G  0 disk
└─sda1        8:1    0 931.5G  0 part /media/clay/CLAY_DEVICE
nvme0n1     259:0    0 953.9G  0 disk
├─nvme0n1p1 259:1    0   512M  0 part /boot/efi
└─nvme0n1p2 259:2    0 953.4G  0 part /var/snap/firefox/common/host-hunspell
                                      /


sda is my external HDD, which is automatically mounted at /media/clay/CLAY_DEVICE, and nvme0n1 is the built-in SSD on my laptop.
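
If it is not obvious which entry is the external drive, one quick check is to ask lsblk for each disk's model and transport type (usb for a USB-attached drive, nvme for the built-in SSD):

lsblk -d -o NAME,SIZE,MODEL,TRAN

Here -d hides the partitions and -o selects the columns to display.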


dd Command

We can use the dd command to write a 1GB file to the hard drive we want to test; dd will report the elapsed time and throughput. The oflag=direct flag bypasses the operating system’s page cache so the result reflects the disk itself, and the surrounding sync calls flush any pending writes.

sync; dd if=/dev/zero of=/media/clay/CLAY_DEVICE/testfile bs=1G count=1 oflag=direct; sync


Output:

1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 25.1677 s, 42.7 MB/s


Conversely, we can test the read speed by reading the 1GB testfile back from the hard drive; iflag=direct again bypasses the cache, so we measure the disk rather than RAM.

sync; dd if=/media/clay/CLAY_DEVICE/testfile of=/dev/null bs=1G count=1 iflag=direct; sync


Output:

1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 24.4449 s, 43.9 MB/s
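
As a side note, if your filesystem or dd build does not support direct I/O, a common alternative is to drop the page cache before the read test so that cached data does not inflate the result:

sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
dd if=/media/clay/CLAY_DEVICE/testfile of=/dev/null bs=1G count=1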


This method is simple and straightforward and does not require installing additional tools. However, be very careful with the of= target: writing to the wrong device or path can destroy important data.
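
Also note that the test leaves a 1GB testfile behind on the drive; once you are done, you can simply remove it:

rm /media/clay/CLAY_DEVICE/testfile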


hdparm Command

First, we can install this tool:

sudo apt install hdparm


Then, simply run:

sudo hdparm -tT /dev/sda


Output:

/dev/sda:
Timing cached reads: 35514 MB in 2.00 seconds = 17784.75 MB/sec
Timing buffered disk reads: 124 MB in 3.00 seconds = 41.32 MB/sec


The -T option tests the speed of reading from the cache, while the -t option tests the speed of reading directly from the disk. Therefore, both results are shown.
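
For more stable numbers, the hdparm documentation suggests repeating the measurement a few times on an otherwise idle system; a simple shell loop works:

for i in 1 2 3; do sudo hdparm -tT /dev/sda; done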


fio Command

First, let’s install the fio tool:

sudo apt install fio


Next, we need to create a configuration file; save the following as fiotest.fio:

[global]
size=1G
direct=1
ioengine=libaio
bs=4k
numjobs=1
runtime=60
group_reporting

[readtest]
rw=read
filename=/media/clay/CLAY_DEVICE/readtestfile

[writetest]
rw=write
filename=/media/clay/CLAY_DEVICE/writetestfile


Then you can run the following command to perform the fio analysis.

fio fiotest.fio


Output:

readtest: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
writetest: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.28
Starting 2 processes
readtest: Laying out IO file (1 file / 1024MiB)
writetest: Laying out IO file (1 file / 1024MiB)
Jobs: 2 (f=0): [f(2)][100.0%][r=984KiB/s,w=984KiB/s][r=246,w=246 IOPS][eta 00m:00s]
readtest: (groupid=0, jobs=2): err= 0: pid=2945986: Thu Aug 8 16:05:06 2024
read: IOPS=1203, BW=4815KiB/s (4931kB/s)(282MiB/60002msec)
slat (usec): min=3, max=218, avg=10.08, stdev= 8.82
clat (usec): min=209, max=87275, avg=819.46, stdev=1644.63
lat (usec): min=248, max=87283, avg=829.62, stdev=1645.49
clat percentiles (usec):
| 1.00th=[ 285], 5.00th=[ 314], 10.00th=[ 322], 20.00th=[ 347],
| 30.00th=[ 359], 40.00th=[ 367], 50.00th=[ 375], 60.00th=[ 396],
| 70.00th=[ 445], 80.00th=[ 482], 90.00th=[ 3687], 95.00th=[ 3916],
| 99.00th=[ 4228], 99.50th=[ 4490], 99.90th=[11469], 99.95th=[17433],
| 99.99th=[55837]
bw ( KiB/s): min= 856, max= 9947, per=100.00%, avg=4848.71, stdev=3366.90, samples=119
iops : min= 214, max= 2486, avg=1212.14, stdev=841.71, samples=119
write: IOPS=1248, BW=4996KiB/s (5116kB/s)(293MiB/60001msec); 0 zone resets
slat (usec): min=242, max=82627, avg=798.46, stdev=1525.30
clat (nsec): min=166, max=41292, avg=975.41, stdev=1372.50
lat (usec): min=242, max=82638, avg=799.75, stdev=1525.55
clat percentiles (nsec):
| 1.00th=[ 193], 5.00th=[ 213], 10.00th=[ 219], 20.00th=[ 249],
| 30.00th=[ 258], 40.00th=[ 266], 50.00th=[ 294], 60.00th=[ 426],
| 70.00th=[ 652], 80.00th=[ 2040], 90.00th=[ 2672], 95.00th=[ 4384],
| 99.00th=[ 4960], 99.50th=[ 5344], 99.90th=[14272], 99.95th=[15168],
| 99.99th=[23680]
bw ( KiB/s): min= 863, max=10288, per=100.00%, avg=5030.72, stdev=3491.87, samples=119
iops : min= 215, max= 2572, avg=1257.65, stdev=872.96, samples=119
lat (nsec) : 250=10.39%, 500=21.31%, 750=4.99%, 1000=1.54%
lat (usec) : 2=2.41%, 4=7.23%, 10=2.97%, 20=0.07%, 50=0.01%
lat (usec) : 250=0.01%, 500=41.67%, 750=1.86%, 1000=0.08%
lat (msec) : 2=0.02%, 4=4.03%, 10=1.34%, 20=0.05%, 50=0.01%
lat (msec) : 100=0.01%
cpu : usr=0.25%, sys=2.13%, ctx=147417, majf=0, minf=28
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=72234,74941,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
READ: bw=4815KiB/s (4931kB/s), 4815KiB/s-4815KiB/s (4931kB/s-4931kB/s), io=282MiB (296MB), run=60002-60002msec
WRITE: bw=4996KiB/s (5116kB/s), 4996KiB/s-4996KiB/s (5116kB/s-5116kB/s), io=293MiB (307MB), run=60001-60001msec

Disk stats (read/write):
sda: ios=72177/74894, merge=0/1, ticks=58993/58467, in_queue=117458, util=99.88%
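
If you want to post-process the results programmatically rather than read this report by hand, fio can also emit machine-readable output, for example:

fio --output-format=json --output=results.json fiotest.fio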


Here is a brief explanation of the fio configuration file:

[global] Section

This section defines the global settings shared by all tests:

  1. size=1G
    This parameter specifies that each job will read/write a total of 1GB of data
  2. direct=1
    Setting this to 1 means performing direct I/O operations, bypassing the operating system’s cache
  3. ioengine=libaio
    Specifies the I/O engine to be libaio, which is the Linux asynchronous I/O engine. This is one of the many I/O engines supported by fio and is suitable for efficient asynchronous I/O operations
  4. bs=4k
    Specifies the block size for I/O operations to be 4KB, which is the data chunk size for each read/write operation
  5. numjobs=1
    Specifies the number of jobs to be started, which is 1 in this case. This means there will be only one concurrent operation in this test
  6. runtime=60
    Caps each job at 60 seconds. Note that without the time_based option a job also ends as soon as it has transferred the full size, so this is an upper bound rather than a guaranteed duration
  7. group_reporting
    Enables group reporting mode, where results from all jobs are aggregated into a single report instead of being reported separately

[readtest] Section

This section defines the settings specifically for the read test:

  1. rw=read
    Specifies the I/O operation to be read. This means this job will only perform read operations
  2. filename=/media/clay/CLAY_DEVICE/readtestfile
    Specifies the file to be read as /media/clay/CLAY_DEVICE/readtestfile

[writetest] Section

This section defines the settings specifically for the write test:

  1. rw=write
    Specifies the I/O operation to be write. This means this job will only perform write operations
  2. filename=/media/clay/CLAY_DEVICE/writetestfile
    Specifies the file to be written as /media/clay/CLAY_DEVICE/writetestfile
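
Incidentally, the configuration file is optional: fio accepts the same parameters directly on the command line. The following should be equivalent to the [writetest] job above (adjust the path to your own mount point):

fio --name=writetest --filename=/media/clay/CLAY_DEVICE/writetestfile \
    --size=1G --direct=1 --ioengine=libaio --bs=4k --numjobs=1 \
    --runtime=60 --group_reporting --rw=write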

Conclusion

After analyzing my HDD with the three tools above, I found that it indeed lags far behind an SSD. In the end, I decided against using this external HDD to host an emergency backup operating system.

