System check and harddisk/raid speed tests in a nutshell
A collection of useful commands for inventory and speed tests
Part I: Inventory
Often people complain about low transfer speeds to their OMV box. This is not always a problem of network drivers, cabling or switching hardware, protocols or client OS versions: if the internal transfer speed of your box is slow, the network transfer speed cannot be fast.
So I thought it would be a good idea to collect system check and testing commands as a starting point for finding out what is going on inside your box if you are experiencing slow transfer speeds. This can help users gain information about their system and provides useful details for the supporters.
There’s a lot of this information stored in this forum, but it is scattered and sometimes hard to find. Ok, let’s start with a system inventory. I’ve tested all these commands on my home box with an Asus ITX board; see my footer for more details.
If you know what the OMV system (or, to be more precise, the Linux kernel) detects in your system, you can compare that to what your hardware should be capable of and spot differences. And you can make sure that the correct kernel driver is loaded for the hardware in question.
1. Hardware overview
For this we need the lspci command, part of the pciutils suite. It is not installed by default; you can install it with apt-get install pciutils. The command without parameters delivers a first overview; to go into detail we use one of the verbose levels. Because it delivers more information than fits on the screen, it is a good idea to redirect the output into a file with lspci -v > lspciout.txt, so you have all the information in one file.
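If you are only interested in the storage-related entries, you can also filter the output directly. A minimal sketch (the search patterns are my assumption and may need adjusting to your controller names; -k prints the kernel driver in use):

# show storage controllers plus the three following lines, including the driver
lspci -k | grep -EiA3 'sata|ide|raid'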
The first verbose level, lspci -v, delivers much more detail (to go deeper you can use -vv or -vvv). Of interest is the information about the harddisk controllers:

00:11.0 SATA controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 SATA Controller [IDE mode] (rev 40) (prog-if 01 [AHCI 1.0])
        Subsystem: ASUSTeK Computer Inc. Device 8496
        Flags: bus master, 66MHz, medium devsel, latency 32, IRQ 19
        I/O ports at f190 [size=8]
        I/O ports at f180 [size=4]
        I/O ports at f170 [size=8]
        I/O ports at f160 [size=4]
        I/O ports at f150 [size=16]
        Memory at feb4b000 (32-bit, non-prefetchable) [size=1K]
        Capabilities: [70] SATA HBA v1.0
        Capabilities: [a4] PCI Advanced Features
        Kernel driver in use: ahci
Ok, this board has five SATA ports, they are all detected correctly and running in AHCI mode, and the ahci kernel driver is loaded.
For compatibility purposes the PATA drivers are loaded as well:
00:14.1 IDE interface: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 IDE Controller (rev 40) (prog-if 8a [Master SecP PriP])
        Subsystem: ASUSTeK Computer Inc. Device 8496
        Flags: bus master, 66MHz, medium devsel, latency 32, IRQ 17
        I/O ports at 01f0 [size=8]
        I/O ports at 03f4 [size=1]
        I/O ports at 0170 [size=8]
        I/O ports at 0374 [size=1]
        I/O ports at f100 [size=16]
        Kernel driver in use: pata_atiixp
If your board has SATA ports and is AHCI-capable but the ahci kernel driver is not loaded, then the onboard controller was probably not detected correctly, which sometimes happens with exotic hardware. Try installing the backport kernels.
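To verify whether the ahci module is loaded you can also ask the kernel directly. A small sketch; note that on kernels where ahci is compiled in rather than built as a module, lsmod shows nothing and the «Kernel driver in use» line from lspci -v is the authoritative source:

# list loaded kernel modules and look for ahci
lsmod | grep ahci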
2. Detect the SATA levels
All right, let’s do the next check: detect the SATA level for every single harddisk and its port. You need the «system names» like sda, sdb and so on to do this; they can be found in the web GUI under Storage/Physical Disks.
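If you prefer the command line over the web GUI, lsblk lists the same device names. A sketch, assuming your util-linux version already ships lsblk:

# list block devices with their size and drive model
lsblk -o NAME,SIZE,MODEL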
We use the command hdparm -I /dev/sdc | grep -i speed and get this:

root@DATENKNECHT:~# hdparm -I /dev/sdc | grep -i speed
           *    Gen1 signaling speed (1.5Gb/s)
           *    Gen2 signaling speed (3.0Gb/s)
           *    Gen3 signaling speed (6.0Gb/s)
The output shows that this harddisk is capable of SATA-III from the board port up to the disk.
The same command for sdb in my system shows this:
root@DATENKNECHT:~# hdparm -I /dev/sdb | grep -i speed
           *    Gen1 signaling speed (1.5Gb/s)
           *    Gen2 signaling speed (3.0Gb/s)
Aha, a difference: the maximum is only SATA-II even though it is connected to an eSATA port which is SATA-III capable as well. But the disk (a 2.5″ notebook disk for the OS) is only SATA-II capable, so this is correct.
If your system can deliver Gen3 signaling (aka SATA-III) and your harddisks are SATA-III ones but the output does not show Gen3 signaling, check if your controller is detected correctly and check your SATA cabling. Old SATA cables are sometimes not SATA-III ready, or they degrade over the years.
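To run this check over all disks at once, a small shell loop helps. A sketch; the glob /dev/sd[a-z] is an assumption and may need adjusting to your system:

# print the supported signaling speeds for every disk
for d in /dev/sd[a-z]; do
    echo "== $d =="
    hdparm -I "$d" | grep -i speed
done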
3. Harddisk details and SATA connections
Next we check if the harddisks are detected correctly with egrep 'ata[0-9].|SATA link up' /var/log/dmesg:
root@DATENKNECHT:~# egrep 'ata[0-6].|SATA link up' /var/log/dmesg
[    2.507922] ata1.00: ATA-8: HGST HDS724040ALE640, MJAOA580, max UDMA/133
[    2.507930] ata1.00: 7814037168 sectors, multi 16: LBA48 NCQ (depth 0/32)
[    2.547057] ata1.01: ATA-8: WDC WD1600BEVT-75ZCT2, 11.01A11, max UDMA/133
[    2.547069] ata1.01: 312581808 sectors, multi 16: LBA48 NCQ (depth 0/32)
[    2.547081] ata1.00: limited to UDMA/33 due to 40-wire cable
[    2.547086] ata1.01: limited to UDMA/33 due to 40-wire cable
[    2.561231] ata1.00: configured for UDMA/33
[    2.577216] ata1.01: configured for UDMA/33
[    2.848317] ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    2.848365] ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    2.848400] ata6: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    2.848425] ata5: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    2.850552] ata3.00: ATA-8: HGST HDS724040ALE640, MJAOA580, max UDMA/133
[    2.850560] ata3.00: 7814037168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
[    2.850601] ata6.00: ATA-8: HGST HDS724040ALE640, MJAOA580, max UDMA/133
[    2.850608] ata6.00: 7814037168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
[    2.850640] ata5.00: ATA-8: HGST HDS724040ALE640, MJAOA580, max UDMA/133
[    2.850645] ata5.00: 7814037168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
[    2.850661] ata4.00: ATA-8: HGST HDS724040ALE640, MJAOA580, max UDMA/133
[    2.850666] ata4.00: 7814037168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
[    2.852309] ata3.00: configured for UDMA/133
[    2.852347] ata5.00: configured for UDMA/133
[    2.852377] ata6.00: configured for UDMA/133
[    2.852512] ata4.00: configured for UDMA/133
Oh, that revealed a possible cabling problem in my system. ata1.00 is connected to the same SATA ports on the board as ata3.00 to ata6.00, but the output says it is limited to UDMA/33 due to a bad cable. In fact I use a different cable for this disk because it is mounted in a 5.25″ bay at the top of the front, and the brand new yellow SATA-III cables I used for the other data disks were too short, so I used an old cable that was lying around. Hm.
But ata1.01 (the OS disk connected to an eSATA port) shows the same, and since these two disks are obviously connected to the same port group on the board (both are ata1. and no ata2. was found), I came to the conclusion that the board maker used a port multiplier with different capabilities than the other four SATA ports, and this group seems to run in PATA mode (a 40-wire cable is an old-style flat cable). Another difference: the two disks (ata1.00 and ata1.01) do not use the AA mode.
This can be a problem if the speed differs a lot from the other disks, but in my case it doesn’t. This disk delivers the same transfer speeds as the other ones.
It is also important to check whether all disks are using NCQ (native command queuing); if one does not, the other disks can be slowed down to the speed of that one. AA is the SATA auto-activate mode. I haven’t encountered any problems yet, but found some postings on the web saying that a missing AA mode can lead to failures in sleep mode (I do not use sleep mode myself).
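If you want to see the queue depth the kernel actually negotiated for a disk, sysfs exposes it. A sketch; replace sda with your disk, and note that a value of 1 usually means NCQ is disabled:

# current queue depth as seen by the kernel
cat /sys/block/sda/device/queue_depth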
That’s a first overview of the system.
Part II: Block devices and raid check
1. Block devices
For checking the block device IDs use blkid:
root@pronas5:~# blkid
/dev/sda1: UUID="feef53ee-3b49-40be-ad45-c984f6719781" TYPE="ext4"
/dev/sda5: UUID="c10ed016-5195-4f87-8cc2-1e9f050c8734" TYPE="swap"
/dev/sdb1: LABEL="allshare" UUID="3c2cc940-7223-4481-870b-ffc331e08e8a" TYPE="ext4"
/dev/sdc: UUID="7b4545ec-1f56-9aae-f506-a4f44ee9b4fe" UUID_SUB="9bbcd7ba-67ea-82bf-e3c3-a30279755fb5" LABEL="pronas5:hdraid" TYPE="linux_raid_member"
/dev/sdd: UUID="7b4545ec-1f56-9aae-f506-a4f44ee9b4fe" UUID_SUB="c7b00968-14ae-dce9-9812-fc5630e1a718" LABEL="pronas5:hdraid" TYPE="linux_raid_member"
/dev/sde: UUID="7b4545ec-1f56-9aae-f506-a4f44ee9b4fe" UUID_SUB="9ae2c5b0-e4f5-67ec-3168-08d38fb133b4" LABEL="pronas5:hdraid" TYPE="linux_raid_member"
/dev/md0: LABEL="raidshare" UUID="862e2ba6-aaf9-4d9f-ae8d-1e2cb99e78d7" TYPE="ext4"
/dev/sdf: UUID="7b4545ec-1f56-9aae-f506-a4f44ee9b4fe" UUID_SUB="afe38fba-ce2d-e714-34e2-fec9f83afe55" LABEL="pronas5:hdraid" TYPE="linux_raid_member"
The output shows details about the block devices (harddisks and raid arrays).
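As an alternative overview, lsblk -f shows filesystem type, label and UUID in a tree that also makes the raid membership visible. A sketch, assuming a reasonably recent util-linux:

# tree view of block devices with filesystem details
lsblk -f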
2. Raid status
To check the status of a raid there are two commands you can use. A short overview is given by cat /proc/mdstat:
root@pronas5:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdf[3] sdc[0] sdd[1]
      209649664 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
To go into detail use mdadm --detail /dev/md<number_of_raid> (usually 127 or 0):
root@pronas5:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Dec  8 08:47:11 2014
     Raid Level : raid5
     Array Size : 209649664 (199.94 GiB 214.68 GB)
  Used Dev Size : 104824832 (99.97 GiB 107.34 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent
    Update Time : Tue Dec  9 11:41:17 2014
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 512K
           Name : pronas5:hdraid (local to host pronas5)
           UUID : 7b4545ec:1f569aae:f506a4f4:4ee9b4fe
         Events : 1333

    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync   /dev/sdc
       1       8       48        1      active sync   /dev/sdd
       3       8       80        2      active sync   /dev/sdf
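If the array is currently syncing or rebuilding, you can follow the progress live. A sketch using watch; -d highlights the changes between refreshes:

watch -d cat /proc/mdstat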
Part III: Speed tests
1a. Check with dd to tmpfs
Attention: You can alter the values for the file size and the number of counts, but take care about the free space on your devices. A good idea is to check the free space with df -h before you change these values.

Now let’s do some speed tests and start with dd like this: dd conv=fdatasync if=/dev/sda of=/tmp/test.img bs=1G count=3:
root@DATENKNECHT:~# dd conv=fdatasync if=/dev/sda of=/tmp/test.img bs=1G count=3
3+0 records in
3+0 records out
3221225472 bytes (3.2 GB) copied, 23.865 s, 135 MB/s
This command «copies» data from sda in 1G blocks to /tmp/test.img (you have to remove this file afterwards; take care: count=3 means that the target file will have a size of 3G, more will probably fill up the /tmp filesystem).
In other words, this shows the transfer speed from sda (a raid harddisk) to tmpfs (virtual memory). conv=fdatasync makes dd wait until the whole file is physically written, so it gives some kind of raw speed. count=3 transfers three 1G blocks; the larger amount of data makes sure that caching does not falsify the result.
You can test that with single disks or the whole raid.
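The test above reads from a disk; if you want a raw write speed figure for a data drive, you can turn it around and write zeros to a file on that drive. A sketch; /path/to/datadrive is a placeholder for a mounted filesystem on the drive you want to test, and again check the free space with df -h first:

# write 3G of zeros, waiting until the data is physically on disk
dd conv=fdatasync if=/dev/zero of=/path/to/datadrive/test.img bs=1G count=3
# remove the test file afterwards
rm /path/to/datadrive/test.img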
1b. Check with dd to the boot drive
Change the target to a directory inside the rootfs (e.g. /home).
dd conv=fdatasync if=/dev/sda of=/home/test.img bs=1G count=3:

root@DATENKNECHT:/home# dd conv=fdatasync if=/dev/sda of=/home/test.img bs=1G count=3
3+0 records in
3+0 records out
3221225472 bytes (3.2 GB) copied, 71.7702 s, 44.9 MB/s
You see that this is much slower, but it gives you a realistic speed figure for a transfer between a data drive and the boot drive (remove /home/test.img afterwards as well).
2. Throughput of CPU, cache and memory
A test for checking the throughput of the system is hdparm with the parameters -Tt: the -T part measures CPU, cache and memory _without_ disk reads, while -t measures buffered reads from the device itself. Run hdparm -Tt /dev/md127:
root@DATENKNECHT:/tmp# hdparm -Tt /dev/md127

/dev/md127:
 Timing cached reads:   2224 MB in  2.00 seconds = 1112.99 MB/sec
 Timing buffered disk reads: 1124 MB in  3.00 seconds = 374.60 MB/sec
Run it several times on an otherwise idle system.
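For example, a quick loop like this (a sketch) gives you three runs to compare; on an idle box the results should be close to each other:

for i in 1 2 3; do hdparm -Tt /dev/md127; done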
3. dd to /dev/null
The next transfer speed test is a dd test again, this time not creating a file but writing to /dev/null, so it gives a read speed result for every single disk without copying to another one: dd if=/dev/sda of=/dev/null bs=1G count=3:
root@DATENKNECHT:/tmp# dd if=/dev/sda of=/dev/null bs=1G count=3
3+0 records in
3+0 records out
3221225472 bytes (3.2 GB) copied, 17.7039 s, 182 MB/s
The conv parameter from the examples above is not allowed when using /dev/null.
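To compare all member disks quickly, this read test can be wrapped in a loop as well. A sketch; count=1 keeps each run short, and the glob is again an assumption:

# read 1G from every disk and report the speed
for d in /dev/sd[a-z]; do
    echo "== $d =="
    dd if="$d" of=/dev/null bs=1G count=1
done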
The same using the whole raid:

root@DATENKNECHT:/tmp# dd if=/dev/md127 of=/dev/null bs=1G count=3
3+0 records in
3+0 records out
3221225472 bytes (3.2 GB) copied, 8.41633 s, 383 MB/s
Ok, I think you got it now. The values above are IMHO not bad; the Asus board was a good decision. You may see other values even if the tests run on similar hardware, but this is nothing to worry about: every single system has its peculiarities, from the PSU, BIOS and HDD firmware versions to cabling and other conditions; even temperature differences can lead to different results.
I would be happy if this is somehow useful.
Questions / Problems / Discussions: System check and harddisk/raid speed tests in a nutshell