SSD Drives in a Large RAID Array: Performance Benchmarking and Results

SSDs, being one of the newest forms of storage, have been making rapid advancements in speed. It's now fairly common for a single SSD to push the throughput limit of whatever interface it sits on.

Of course, we're always looking for faster. So the question is: how can we go faster when each individual drive is already pushed to the bandwidth limit of SATA?

The answer is easy enough that any IT professional or enthusiast should be able to tell you: RAID. RAID combines multiple drives to collectively increase the throughput of a volume. The interesting part, however, is that our usual knowledge of RAID performance comes from hard drives, and while SSDs serve the same function as a hard drive, their performance is drastically different, which creates some interesting circumstances.

The historical rule of thumb for RAID with hard drives, ranked fastest to slowest, is as follows.

Read Performance     Write Performance
RAID 0               RAID 0
RAID 10              RAID 10
RAID 1 / RAID 5      RAID 1
                     RAID 5
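
For reference, here is a minimal Python sketch of the textbook throughput model behind that ranking. The per-drive speed, the assumption of an ideal controller, and the neglect of parity-computation cost are all simplifications, not anything measured in this article.

    # Rough textbook model of large sequential throughput for common RAID levels.
    # Assumes n identical drives and an ideal controller; real arrays fall short.

    def raid_throughput(n, drive_read, drive_write, level):
        """Return (read_MBps, write_MBps) for an n-drive array of the given level."""
        if level == "RAID 0":
            return n * drive_read, n * drive_write
        if level == "RAID 10":                       # striped mirrors, n must be even
            return n * drive_read, (n // 2) * drive_write
        if level == "RAID 5":                        # one drive's worth of parity
            return (n - 1) * drive_read, (n - 1) * drive_write
        if level == "RAID 6":                        # two drives' worth of parity
            return (n - 2) * drive_read, (n - 2) * drive_write
        raise ValueError(level)

    # Example with a hypothetical 500MB/s SATA SSD:
    for level in ("RAID 0", "RAID 10", "RAID 5", "RAID 6"):
        print(level, raid_throughput(8, 500, 500, level))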

But SSDs are much faster; will that fundamentally change how fast a RAID array performs?


The Test Bed

For this test we're using a Dell server; the specs are below.

Hardware

  • Dell PowerEdge R720 Server
  • 2x Intel Xeon E5-2640
  • 96GB of RAM
  • PERC H710 Mini w/ 512MB RAM (FW 21.2.0-0007)
  • 8x Samsung SSD 840 EVO 250GB
  • 1TB Seagate HDD

Software

  • Windows Server 2008 R2
  • HD Tune Pro

RAID Configuration

  • 64KB stripes
  • 8x SSD RAID 5
  • 8x SSD RAID 10
  • 8x SSD RAID 6
  • 8x SSD RAID 50


Benchmarking Results

For the benchmarks we used HD Tune Pro 5.50.

[Chart: average read and write throughput for each 8-drive RAID configuration]

We took the average throughput from testing across the entire volume in 1MB blocks.
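
HD Tune's test itself is closed source, but a rough Python sketch of the same idea, reading a volume sequentially in 1MB blocks and averaging, might look like the following. The path and sample size are placeholders, and OS caching means this will not exactly reproduce HD Tune's raw-access numbers.

    import time

    BLOCK = 1024 * 1024          # 1MB blocks, matching the methodology above
    SAMPLE_BYTES = 4 * 1024**3   # amount to read per run in this sketch (hypothetical)

    def sequential_read_mbps(path):
        """Average sequential read throughput over SAMPLE_BYTES, in MB/s."""
        read = 0
        start = time.perf_counter()
        with open(path, "rb", buffering=0) as f:   # no Python-level buffering
            while read < SAMPLE_BYTES:
                chunk = f.read(BLOCK)
                if not chunk:                      # end of file or volume
                    break
                read += len(chunk)
        elapsed = time.perf_counter() - start
        return read / elapsed / (1024 * 1024)

    # Example (hypothetical raw volume path; needs admin rights on Windows):
    # print(sequential_read_mbps(r"\\.\D:"))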

The most surprising result of the test is that RAID 10 is the slowest of all the RAID levels in both read and write. Considering that the main goal of RAID 10 is to provide the best performance with redundancy, that's a bit shocking.

So now the question is: why? What's causing this discrepancy? For that we need more data. In the next set of tests we benchmark each RAID level, starting with the smallest array it supports and adding drives until we reach 8.

RAID 0 Benchmarking Results

[Charts: RAID 0 read and write throughput vs. number of drives]

From this we can see that after 6 drives the gains are fairly small. RAID 0 at 7 drives also seems to have an odd issue; it was retested multiple times, but the low read throughput was consistent.

As you can see from the curve, the gains in this configuration get smaller and smaller with each drive you add. A 2-drive RAID 0 is almost 93% faster than a single drive, but going from 4 drives to 5 drives gains only 15%. Eventually we'll reach a point where there are no noticeable gains in speed at all. The write speed, which is already lower, is mostly level by 5 drives.
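
That diminishing-returns curve is easy to quantify as the percentage gain from each added drive. A quick sketch, using placeholder throughput numbers shaped to match the percentages quoted above rather than the measured data:

    # Percent gain from each added drive, given throughputs for 1..8 drives.
    # These values are hypothetical placeholders, not the article's measurements.
    def per_drive_gains(throughputs):
        return [
            (after / before - 1) * 100
            for before, after in zip(throughputs, throughputs[1:])
        ]

    reads = [500, 965, 1300, 1560, 1790, 1840, 1860, 1880]   # MB/s for 1..8 drives
    for n, gain in enumerate(per_drive_gains(reads), start=2):
        print(f"{n} drives: +{gain:.0f}% over {n - 1} drives")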

RAID 5 Benchmarking Results

[Charts: RAID 5 read and write throughput vs. number of drives]

As expected with RAID 5, we're getting faster reads but slower writes; that hasn't changed. You can also see that RAID 5 reaches the same upper read speed of ~1800MB/s that RAID 0 did, which suggests the bottleneck at that point is something in the RAID controller or further down the line rather than the drives themselves. Odder still, the write speed also caps at ~1200MB/s for both RAID 0 and RAID 5. We'll have to see whether this is a persistent trend.
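
One way to picture that ceiling is a model that scales linearly per drive and then clips at a shared limit. The per-drive speeds and ceilings below are assumptions read off the curves, used only to illustrate the idea:

    # Scale per drive, then clip at a shared ceiling. All figures are assumptions.
    PER_DRIVE_READ  = 500    # MB/s, hypothetical single-SSD sequential read
    PER_DRIVE_WRITE = 450    # MB/s, hypothetical single-SSD sequential write
    READ_CEILING    = 1800   # MB/s, where the RAID 0 and RAID 5 read curves flatten
    WRITE_CEILING   = 1200   # MB/s, where both write curves flatten

    def modeled_raid5(n):
        """Ideal RAID 5 scaling clipped at the controller/backplane ceiling."""
        read  = min((n - 1) * PER_DRIVE_READ,  READ_CEILING)
        write = min((n - 1) * PER_DRIVE_WRITE, WRITE_CEILING)
        return read, write

    for n in range(3, 9):
        print(n, "drives:", modeled_raid5(n))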

RAID 6

[Charts: RAID 6 read and write throughput vs. number of drives]

Here we can see the extra parity causing a massive slowdown compared to the other RAID levels. You can also see a curve start to form, but without additional drives we can't tell whether this configuration also flattens out at 1800MB/s.
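
For context, the usual explanation is the extra parity work. The textbook write-penalty factors and the smaller data fraction of each RAID 6 stripe are sketched below with standard values, not anything measured here:

    # Classic small-write penalty factors (physical I/Os per logical write) and
    # the share of each full stripe that holds user data. Textbook values.
    WRITE_PENALTY = {"RAID 0": 1, "RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

    def data_fraction(n_drives, parity_drives):
        """Share of each full stripe that holds user data."""
        return (n_drives - parity_drives) / n_drives

    print(data_fraction(8, 1))   # RAID 5 on 8 drives: 0.875
    print(data_fraction(8, 2))   # RAID 6 on 8 drives: 0.75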

RAID 10

[Charts: RAID 10 read and write throughput vs. number of drives]

Another surprising result. I was expecting a 4-drive RAID 10 to be about as fast as a 2-drive RAID 0; the mostly flat line here shows there is very little correlation, if any. Another major surprise: a 4-drive RAID 10 is faster than a 3-drive RAID 5, which would give you the same amount of storage, but compared against a 4-drive RAID 5 the read speeds are on par and, surprisingly, the write speed is faster on the RAID 5. So the general statement that RAID 10 is faster than RAID 5 only applies at small scale, as RAID 10 scales worse in both performance and storage than any other RAID level here.
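
The capacity side of that comparison is easy to check with the 250GB drives from the test bed; a quick sketch:

    # Usable capacity for the comparison above (raw capacity, before formatting).
    DRIVE_GB = 250

    def usable_gb(n_drives, level):
        if level == "RAID 10":
            return (n_drives // 2) * DRIVE_GB
        if level == "RAID 5":
            return (n_drives - 1) * DRIVE_GB
        raise ValueError(level)

    print(usable_gb(4, "RAID 10"))   # 500 GB
    print(usable_gb(3, "RAID 5"))    # 500 GB -- same capacity as the 4-drive RAID 10
    print(usable_gb(4, "RAID 5"))    # 750 GB -- more storage at the same drive count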

RAID 50

[Charts: RAID 50 read and write throughput vs. number of drives]

With only 2 data points for RAID 50 we can't draw many real conclusions, and the results leave plenty of questions open. It looks as if the read performance of a 6-drive RAID 50 is roughly twice that of a 3-drive RAID 5, minus an overhead of ~230MB/s. For the write throughput there simply isn't enough data to conclude anything.
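
That observation can be written as a simple formula, with the 3-drive RAID 5 read speed left as a parameter rather than filled in here:

    # The relationship described above, expressed as a formula.
    RAID50_OVERHEAD_MBPS = 230   # approximate gap observed above

    def estimated_raid50_read(raid5_3drive_read_mbps):
        """Estimate a 6-drive RAID 50 read from its two 3-drive RAID 5 legs."""
        return 2 * raid5_3drive_read_mbps - RAID50_OVERHEAD_MBPS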


Conclusion

Going through the data, it seems RAID 10 is becoming obsolete on SSDs. Previously it was thought that the parity system in RAID 5 would tax write performance; while it does here, the cost is smaller than that of the mirrored drives. At 4 drives, the minimum for RAID 10, it's on par with the performance of a 4-drive RAID 5, but the RAID 5 offers more storage. As you add more drives the gap only widens; by the next larger RAID 10 size, 6 drives, RAID 50 is already the better option.

However, this is only the case for SSDs on a powerful RAID controller. As we see here, this setup caps out around 1800MB/s; with more drives it would be interesting to see whether RAID 50 would also be a strong contender.

