A Year With NVMe RAID 0 in a Real World Setup

Introduction



There are a lot of misconceptions, and some correct information, out there when it comes to combining solid state drives with RAID setups, and what better way to get to the root of them than to create a real-world test setup and find out? I have been running this setup for a year now, and it is time to take a closer look at how well the two drives have performed.

Performing a test like this would already be fun with normal SSDs, but it is even more fun to do it with NVMe drives that, on their own, are around four times faster than SATA3 drives. Since I have a motherboard that supports two NVMe M.2 drives and has Intel RAID onboard, why not go all in and go for the best? At least, that was my thinking.


The two drives that I used for this test are Samsung SM951 drives with a capacity of 256GB each. At the time I started this test, the SM951 was the only M.2 NVMe drive available to consumers, at least at a reasonable price. On top of that, I scored a great deal on the drives, which makes the whole project even more fun. And I can assure you that it is a lot of fun to enjoy the performance, speed, and responsiveness of an NVMe RAID setup on a daily basis.


Having your drives set up in a RAID array does have a few disadvantages, at least when running Windows. For example, we don’t have direct access to the S.M.A.R.T. information of the individual drives without breaking the RAID array. Since I will need that information for this article, I’ll simply clone my setup onto another drive, which will allow me to destroy the array and examine the drives individually.
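For those who want to check their own drives, the sketch below shows one way to pull the per-drive write counter once the array is broken and the drives are visible to the operating system again. It is a minimal illustration using Python and smartmontools’ `smartctl`; the device path is a placeholder and the exact output format can vary between platforms and tool versions, so treat it as a starting point rather than the exact procedure used for this article.

```python
# Minimal sketch: read the NVMe "Data Units Written" counter for one drive
# via smartmontools, once the drive is directly visible to the OS (i.e. the
# RAID array has been broken). The device path is a placeholder.
import re
import subprocess

def data_units_written(device: str = "/dev/nvme0") -> int:
    """Return the raw 'Data Units Written' counter reported by smartctl."""
    out = subprocess.run(["smartctl", "-a", device],
                         capture_output=True, text=True).stdout
    # The NVMe health section normally contains a line such as:
    #   Data Units Written:    12,345,678 [6.32 TB]
    match = re.search(r"Data Units Written:\s+([\d,]+)", out)
    if not match:
        raise RuntimeError("Data Units Written not found in smartctl output")
    return int(match.group(1).replace(",", ""))

if __name__ == "__main__":
    print(data_units_written())
```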

Technically, TRIM shouldn’t be a problem with modern drives even in RAID setups, but people still worry about it and whether it will have any effect on their drives. The same goes for wear-leveling and garbage collection. This long-term, real-world test will let us see the impact for ourselves, which should leave little doubt about the effectiveness and costs of running an SSD RAID setup. It will also give us a good view of how much data is actually written over a year of use.
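As a quick reference for interpreting those numbers: NVMe drives count writes in “data units” of 1,000 × 512 bytes, so converting the raw counter into terabytes written, and into a share of a drive’s rated endurance, is simple arithmetic. The sketch below uses an assumed endurance figure purely for illustration; it is not the SM951’s official rating.

```python
# Minimal sketch: turn the NVMe "Data Units Written" counter into terabytes
# written and a share of a rated endurance. One data unit = 1000 * 512 bytes
# per the NVMe specification. The 150 TBW endurance figure is an assumed
# placeholder, not the SM951's official rating.
BYTES_PER_DATA_UNIT = 1000 * 512

def terabytes_written(data_units: int) -> float:
    return data_units * BYTES_PER_DATA_UNIT / 1e12

def endurance_used(data_units: int, rated_tbw: float = 150.0) -> float:
    """Fraction of the assumed rated TBW consumed so far."""
    return terabytes_written(data_units) / rated_tbw

# Example: 20,000,000 data units is about 10.24 TB written, roughly 7% of an
# assumed 150 TBW rating.
if __name__ == "__main__":
    units = 20_000_000
    print(f"{terabytes_written(units):.2f} TB written, "
          f"{endurance_used(units):.1%} of the assumed endurance rating")
```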

Basic Drive Specification

  • Client SSD: MZVPV256HDGL
  • M.2 2280 Form factor
  • PCI Express Gen3 x4 and NVMe 1.1
  • Sequential read/write performance: 2150/1260 MB/s
  • Random read/write performance: 300K/100K IOPS
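Before looking at the results, it is worth sketching the theoretical ceiling for two of these drives in RAID 0. Doubling the sequential figures above gives the naive expectation, but on a platform where both M.2 slots hang off the chipset (as a commenter points out for Z170 further down), everything is also capped by the DMI 3.0 uplink of x4 lanes at 8 GT/s with 128b/130b encoding, roughly 3.94 GB/s. The snippet below is a back-of-the-envelope estimate under those assumptions, not a measured result.

```python
# Back-of-the-envelope ceiling for two of these drives in RAID 0 behind a
# DMI 3.0 link. Per-drive figures are taken from the specification list
# above; real-world results will be lower.
DRIVE_SEQ_READ_MBS = 2150    # per-drive sequential read (MB/s)
DRIVE_SEQ_WRITE_MBS = 1260   # per-drive sequential write (MB/s)
DRIVES = 2

# DMI 3.0: 4 lanes * 8 Gbps = 32 Gbps raw; 128b/130b encoding leaves
# 32e9 * 128 / 130 bits/s of payload, i.e. roughly 3.94 GB/s.
DMI_LIMIT_MBS = 32e9 * 128 / 130 / 8 / 1e6

naive_read = DRIVE_SEQ_READ_MBS * DRIVES    # 4300 MB/s if nothing else limited it
naive_write = DRIVE_SEQ_WRITE_MBS * DRIVES  # 2520 MB/s

print(f"Sequential read ceiling:  {min(naive_read, DMI_LIMIT_MBS):.0f} MB/s")
print(f"Sequential write ceiling: {min(naive_write, DMI_LIMIT_MBS):.0f} MB/s")
```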

20 Comments

  1. Another thing that should definitely be mentioned as a con is that the likelihood of complete data loss is significantly increased with RAID0…

    1. Which, even IF true, is irrelevant if the array is virtualised cache or scratch file space, as the alternative (far more power-hungry and expensive DRAM) is way riskier.

      At worst, apps can recover from any outages. Nothing valuable is lost.

    2. Considering the minuscule failure rate of these drives, doubling your chances still leaves you with better odds than a hard drive.

    3. That’s just crazy talk.

      With SSDs, lifetime is almost entirely determined by the total number of writes. In a RAID0 setup with two drives, each drive receives half the number of writes for a given workload and can therefore be expected to have roughly twice the lifetime.

      It’s not going to be any less reliable than a single SSD of twice the capacity, as the lifetime due to writes for each drive will approximately double.

      I tried RAID0 with hard drives before and gave up because spinners fail with alarming regularity for me no matter how careful I am with them. My experience with RAID0 and SSDs has been the complete opposite.

  2. The Z170 chipset has a DMI 3.0 link with x4 lanes @ 8 GT/s and 128b/130b encoding: 4 lanes × 8 Gbps = 32 Gbps, and 32 Gbps / 8.125 bits per byte ≈ 3.94 GB/second. As such, this is the max upstream bandwidth of M.2 devices that are connected downstream of that DMI 3.0 link, regardless of the sheer number of such M.2 devices assigned to a RAID-0 array. As long as Intel caps its DMI link at 32 Gbps, an x16-lane NVMe RAID controller is needed to exceed the maximum imposed by that DMI link, e.g. the Highpoint RocketRAID 3840A (x16 edge connector + 4 U.2 ports).

    1. Exactly. A single 960 Pro could more or less saturate the chipset’s 4 lanes.

      I wonder how a pair of 960 Pros would do on proper native motherboard NVMe ports, like on Threadripper or Epyc?

  3. Really cool article, Bohs.
    Just a small correction: your “Access Times” graph title on page 6 says “higher is better” when it should read “lower is better”.

  4. Interesting article, but I am not convinced RAID is a good idea with SSD/flash memory. As you stated: “No trim…wear-leveling.. garbage collection…” etc.
    Those drives work with a “controller”, which is basically a little mini-computer with firmware, and everything I have researched says that those controllers cannot ensure even wear across the drive in RAID 0… Not to mention that I believe you get zero real-world benefits with your memory controller both saturated and tasked with playing bit-traffic-controller.

    Also, you said your computer was heavily used, and this is a bit relative or subjective, but my math says your drives were LIGHTLY used. I used my own (in my opinion MODERATE) numbers as a baseline of comparison. I do not consider myself someone who puts a lot of data on my drives.

    I downloaded CrystalDiskInfo and checked my less-than-four-year-old Samsung EVO SSDs. They showed 27,765 hours… (true 24×7 operation) with over 4 times the data written to the drive, and all I do is play Skyrim, Witcher 3, and Planet Coaster… and watch Netflix. I also dealt with a couple of reimages and a bunch of wedding and honeymoon pics, but these were anomalies. The point is: your test is, respectfully, far too anecdotal and based on too limited a data set to draw conclusions. Your tools cannot account for all the NAND cells and determine how much life remains (whether wear was evenly spread), and with your capacities and the moderate amount of data you’ve written in (less than) 1 year of actual (moderate) use… it would be irresponsible, IMO, for anybody to draw conclusions about putting these expensive drives in RAID.

    If anybody reads my comments, and sees the comments of others, please DO NOT follow this example. RAID and NVMe/SSDs should NOT be mixed. It WILL inevitably reduce the lifecycle and/or performance of the drive relative to its life without RAID. You may not see it in a year or even two (these things are rated to LAST for 171 years of light use, i.e. 1.5 million hours mean time between failures, MTBF)… But if any of these really ever last a lifetime (doubtful), it will NEVER happen in RAID.

    Also ask yourselves: WHY!!! would you put this in RAID? Even with older HDDs, RAID had no real-world benefits in things like FPS, game-loading times, system responsiveness at startup, or Windows boot times (which, due to the customary RAID screen during POST, would typically be slower). RAID was only good for reading huge files, which I admit I rarely ever did. Just use these things in AHCI, by themselves. A Samsung 960 EVO M.2 at 3200MB/s is over 6 times faster than most SSDs. It would make great bragging rights as a boot drive. In an all-SSD system, I would opt for a cheaper standard SSD for storage and keep the NVMe drive as my boot drive only, at which point 256GB/500GB max would do the trick.

  5. Having multiple drives is a headache; getting users to save to different places just doesn’t work…
    Reading big files happens here; I’ve got some folks working with point clouds, and there is a lot of data to shuffle there =)

  6. Why do you say TRIM is not important? There is no way a drive can tell if a data block has been erased without TRIM. Because you write in 4K pages but an SSD writes in larger multiples of this, if a block is marked as having data on it, the whole data block has to be read into memory, then merged with the new data and written back. If the data block has been flagged as empty via TRIM, then you do not need to do the read at all.

    There is literally NOTHING ELSE that replaces TRIM functionality. Without TRIM, you WILL get degraded performance over time/use. Many people don’t understand garbage collection is not the same thing.

    See articles like https://searchstorage.techtarget.com/definition/TRIM for more info.

    1. Where does it say that TRIM isn’t important? It says it shouldn’t be a problem with modern drives; that’s the only thing it says about TRIM. Why isn’t it a problem? Because most modern drives have a RAID engine within the controller used, allowing TRIM to work just fine in RAID setups too.

  7. Although they are the standard which everyone uses, tests, and reports on, 2 drive RAIDs were always a bit ho-hum, imo. The real magic starts at 3 striped drives, and RAID doesn’t really come into its own until you have at least 4 striped drives. You can add mirroring or parity all you want, but the real performance happens when striping across 4 or more drives. But then, I’ve never seen any reasonably priced hardware that allows a 4 drive NVMe RAID; only 2. And that’s a real shame.

  8. I have another question.
    I have a Crucial P1 NVMe drive. It comes with a special driver which, according to the explanation, converts some memory into a cache, and therefore write and read speeds are increased. Dramatically! My tests start from 9GB/s read/write speeds, and that gives a tremendous speed boost. I can definitely tell whether it’s working or not when booting my PC: either I get my desktop in around 2-3 seconds, or I have to wait 10-15 seconds if that cache is turned off. So it looks like I’m going to lose that driver-specific feature if I combine the drives into RAID 0.

  9. Awesome data and article.
    Although my conclusion would be it’s not a good idea. Assuming you’re using these for the OS, the access times are almost all that matters. Do the RAID on the storage drives.

  10. What about doing RAID 0 with 3 or 4 of these drives? Would the access time penalty make it not worthwhile?

  11. One question, guys. I am going to get a 5900X, 64GB of 3200MHz RAM, and an MSI X570 Tomahawk. I would like to fit two NVMe drives: one to install Windows and programs on, and one to use for 3D projects and After Effects. I happened to come across some tests where they said that the second NVMe drive would not run at optimal speed, but would be limited due to the occupied bandwidth and would then drop to SSD-like speeds. Is that true? Would it then be better to put the OS on a SATA SSD and keep the NVMe drive for editing and as a scratch disk? Thanks
